modelId<br>string (lengths 5 to 139) | author<br>string (lengths 2 to 42) | last_modified<br>timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-26 06:27:38) | downloads<br>int64 (0 to 223M) | likes<br>int64 (0 to 11.7k) | library_name<br>string (496 classes) | tags<br>sequence (lengths 1 to 4.05k) | pipeline_tag<br>string (54 classes) | createdAt<br>timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-26 06:27:10) | card<br>string (lengths 11 to 1.01M)
---|---|---|---|---|---|---|---|---|---|
nickrwu/roberta-mqa | nickrwu | 2024-05-19T05:30:03Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-05-17T09:42:45Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-mqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-mqa
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4631
- Accuracy: 0.3793
- F1: 0.3774
- Precision: 0.3819
- Recall: 0.3760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
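For reference, these settings correspond roughly to the following 🤗 `TrainingArguments`; this is a hedged sketch rather than the original training script, and the mapping of `train_batch_size` to `per_device_train_batch_size` is an assumption.
```python
from transformers import TrainingArguments

# Sketch of the reported settings; the Adam betas/epsilon listed above are the library defaults.
training_args = TrainingArguments(
    output_dir="roberta-mqa",
    learning_rate=2e-5,
    per_device_train_batch_size=28,  # assumed mapping of the reported train_batch_size
    per_device_eval_batch_size=28,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,  # "Native AMP" mixed precision
)
```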
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.5076 | 1.0 | 1061 | 1.4901 | 0.3372 | 0.3328 | 0.3366 | 0.3321 |
| 1.4244 | 2.0 | 2122 | 1.4584 | 0.3594 | 0.3560 | 0.3615 | 0.3545 |
| 1.3553 | 3.0 | 3183 | 1.4631 | 0.3793 | 0.3774 | 0.3819 | 0.3760 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
sirgecko/finetune-language-detectionnn | sirgecko | 2024-05-19T05:27:59Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:ivanlau/language-detection-fine-tuned-on-xlm-roberta-base",
"base_model:adapter:ivanlau/language-detection-fine-tuned-on-xlm-roberta-base",
"region:us"
] | null | 2024-05-19T05:25:25Z | ---
library_name: peft
base_model: ivanlau/language-detection-fine-tuned-on-xlm-roberta-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
redponike/Smaug-Llama-3-70B-Instruct-GGUF | redponike | 2024-05-19T05:27:49Z | 1 | 1 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T00:31:42Z | GGUF quants of [abacusai/Smaug-Llama-3-70B-Instruct](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct) |
LeoZZzzZZ/bert-tiny-finetuned-fact | LeoZZzzZZ | 2024-05-19T05:23:06Z | 64 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:prajjwal1/bert-tiny",
"base_model:finetune:prajjwal1/bert-tiny",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-19T05:14:28Z | ---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_keras_callback
model-index:
- name: LeoZZzzZZ/bert-tiny-finetuned-fact
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LeoZZzzZZ/bert-tiny-finetuned-fact
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2093
- Validation Loss: 1.1000
- Train Accuracy: 0.3913
- Train Precision: 0.1531
- Train Recall: 0.3913
- Train F1: 0.2201
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.02, 'decay_steps': 11870, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Train Precision | Train Recall | Train F1 | Epoch |
|:----------:|:---------------:|:--------------:|:---------------:|:------------:|:--------:|:-----:|
| 1.2460 | 1.1773 | 0.3708 | 0.1375 | 0.3708 | 0.2006 | 0 |
| 1.2093 | 1.1000 | 0.3913 | 0.1531 | 0.3913 | 0.2201 | 1 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AlikS/a2c-PandaPickAndPlace-v3 | AlikS | 2024-05-19T05:21:56Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-19T05:17:44Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `<algo>-<env>.zip` naming and should be checked against the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename below is an assumption; verify it against the repository's file list.
checkpoint = load_from_hub(repo_id="AlikS/a2c-PandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
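Once loaded, the policy can be rolled out in the environment; the snippet below is a sketch that assumes `gymnasium` and `panda_gym` are installed (importing `panda_gym` registers `PandaPickAndPlace-v3`).
```python
import gymnasium as gym
import panda_gym  # noqa: F401  (the import registers the Panda environments)

env = gym.make("PandaPickAndPlace-v3")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```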
|
mirlab/AkaLlama-llama3-70b-v0.1 | mirlab | 2024-05-19T05:20:27Z | 24 | 24 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"meta",
"llama-3",
"akallama",
"conversational",
"ko",
"en",
"arxiv:2403.07691",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-04T02:55:12Z | ---
pipeline_tag: text-generation
license: other
license_name: llama3
license_link: LICENSE
language:
- ko
- en
tags:
- meta
- llama
- llama-3
- akallama
library_name: transformers
---
<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image_720.png?raw=true" width="40%"/>
</a>
# AKALLAMA
AkaLlama is a series of Korean language models designed for practical usability across a wide range of tasks.
The initial model, AkaLlama-v0.1, is a fine-tuned version of Meta-Llama-3-70b-Instruct. It has been trained on a custom mix of publicly available datasets curated by the MIR Lab.
Our goal is to explore cost-effective ways to adapt high-performing LLMs for specific use cases, such as different languages (e.g., Korean) or domains (e.g., organization-specific chatbots).
For details, check out [our project page](https://yonsei-mir.github.io/AkaLLaMA-page).
### Model Description
This is the model card of a 🤗 transformers model that has been pushed to the Hub.
- **Developed by:** [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/)
- **Language(s) (NLP):** Korean, English
- **License:** llama3
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
## How to use
This repo provides full model weight files for AkaLlama-70B-v0.1.
### Quantized Weights
| Method | repo |
| :----: | :----: |
| [GGUF](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md) | https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-GGUF |
| [ExLlamaV2](https://github.com/turboderp/exllamav2) | https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1-exl2 |
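For the GGUF weights above, a minimal loading sketch with `llama-cpp-python` could look like this; the quantization filename is an assumption and should be checked against the GGUF repository's file list.
```python
from llama_cpp import Llama

# The filename is an assumption; pick an actual .gguf file from the GGUF repo linked above.
llm = Llama(model_path="AkaLlama-llama3-70b-v0.1-Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "네 이름은 뭐야?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```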
## Use with transformers
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "mirlab/AkaLlama-llama3-70b-v0.1"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
system_prompt = """당신은 연세대학교 멀티모달 연구실 (MIR lab) 이 만든 대규모 언어 모델인 AkaLlama (아카라마) 입니다.
다음 지침을 따르세요:
1. 사용자가 별도로 요청하지 않는 한 항상 한글로 소통하세요.
2. 유해하거나 비윤리적, 차별적, 위험하거나 불법적인 내용이 답변에 포함되어서는 안 됩니다.
3. 질문이 말이 되지 않거나 사실에 부합하지 않는 경우 정답 대신 그 이유를 설명하세요. 질문에 대한 답을 모른다면 거짓 정보를 공유하지 마세요.
4. 안전이나 윤리에 위배되지 않는 한 사용자의 모든 질문에 완전하고 포괄적으로 답변하세요."""
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "네 이름은 뭐야?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
# 내 이름은 AkaLlama입니다! 나는 언어 모델로, 사용자와 대화하는 데 도움을 주기 위해 만들어졌습니다. 나는 다양한 주제에 대한 질문에 답하고, 새로운 아이디어를 제공하며, 문제를 해결하는 데 도움이 될 수 있습니다. 사용자가 원하는 정보나 도움을 받도록 최선을 다할 것입니다!
```
## Evaluation
| Model | Precision | LogicKor single turn* | LogicKor multi turn* | Overall* |
|:------------------------:|:---------:|:--------------------:|:---------------------:|:--------:|
| AkaLlama-llama3-70b-v0.1 | float64 | 6.873 | 7.207 | 7.04 |
<!--
| AkaLlama-llama3-70b-v0.1 | bf16 | 6.873 | 7.206 | 7.04 |
-->
*mean over 3 random seeds
## Training Details
### Training Procedure
We trained AkaLlama using a preference learning alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691).
Our training pipeline is almost identical to that of [HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1), aside from minor hyperparameter changes.
Please check out Huggingface's [alignment handbook](https://github.com/huggingface/alignment-handbook?tab=readme-ov-file) for further details, including the chat template.
### Training Data
Detailed descriptions regarding training data will be announced later.
### Examples
<!--<a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (8).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
너는 누구이고 어떤 일을 할 수 있어?
Model:
저는 인공지능 언어 모델인 AkaLlama(아카라마)입니다. 저는 다양한 주제와 형식의 텍스트를 생성할 수 있으며, 질문에 답하고, 대화를 진행하며, 이야기를 만들 수 있습니다. 또한 사용자가 지정한 주제나 장르에 따라 창의적인 글쓰기나 편집 작업을 수행할 수도 있습니다.
</pre>
<details>
<summary><b>Math Solving[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (9).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
아래 문제를 깊게 생각하면서 풀어봐
연필 5 타와 지우개 45 개가 있습니다. 이것을 될 수 있는 대로 많은 학생들에게 남김없이 똑같이 나누어 주려고 합니다. 몇 명까지 나누어 줄 수 있습니까?
Model:
아래 문제를 깊게 생각하면서 풀어봅시다:
연필 5 타와 지우개 45 개가 있습니다. 이것을 될 수 있는 대로 많은 학생들에게 남김없이 똑같이 나누어 주려고 합니다. 몇 명까지 나누어 줄 수 있습니까?
풀이:
1. 연필 5 타를 1 개씩 나눠서 몇 명에게 나누어 줄 수 있는지 먼저 구합니다.
- 각 연필에는 12개의 연필이 포함되어 있습니다. 따라서 총 12 * 5 = 60개입니다.
- 60개를 1명당 1개씩 나누면 60명의 학생들에게 나눠줄 수 있습니다.
2. 지우개 45 개를 1 개씩 나누어 몇 명에게 나누어 줄 수 있는지 구합니다.
- 45 개를 1 명당 1 개씩 나누면 45 명의 학생들에게 나눠줄 수 있습니다.
3. 연필과 지우개의 두 가지 물품을 함께 고려하여 가장 많은 학생들에게 똑같이 나누어 줄 수 있는 방법을 찾습니다.
- 연필로만 60명, 지우개로만 45명의 학생에게 나눠줄 수 있으므로, 두 종류의 문구를 함께 고려할 때는 이보다 적은 수의 학생에게 나눠주어야 합니다.
- 이 중에서 공통인수(common divisor)를 찾아서 이를 최소 단위로 설정하면 됩니다.
- 60과 45의 최대공약수(GCD)는 15입니다.
4. 연필과 지우개 각각을 GCD(15)로 나누어 몇 개씩 나눠줄 수 있는지 구합니다.
- 연필은 60 / 15 = 4개씩, 총 4 * 12 = 48개의 연필이 필요합니다.
- 지우개는 45 / 15 = 3개씩, 총 3 * 1 = 3개의 지우개가 필요합니다.
5. 최종적으로, 가장 많은 학생들에게 똑같이 나누어 줄 수 있는 방법은 연필과 지우개를 GCD(15)로 나누어 각 학생에게 4개의 연필과 3개의 지우개를 제공하는 것입니다.
- 이 경우 총 48 + 3 = 51개의 물품을 사용하여 15명의 학생들에게 똑같이 나눠줄 수 있습니다.
결론: 연필과 지우개 각각을 GCD(15)로 나누어 각 학생에게 4개의 연필과 3개의 지우개를 제공하는 방법으로 최대 15명의 학생들에게 똑같이 나누어 줄 수 있습니다.
</pre>
</details>
<details>
<summary><b>Writing[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (13).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
아래 문장을 보고 문법적으로 틀린부분을 지적해줘
그는 서슴치 않고 비평한다.
Model:
서슴치"라는 단어는 "서슴"이라는 동사와 "치"라는 접미사를 사용하여 만든 단어입니다. 이 단어는 "서슴"의 의미를 반복적으로 표현하는 데 사용되는 조동사적 용법으로, "서슴"이란 동사의 반복적인 행동을 강조합니다. 따라서 문장에서 "서슴치"라는 단어는 중복적으로 사용되어 문법적으로 틀린 부분입니다.
정답은 다음과 같습니다:
그는 서슴없이 비평한다.
</pre>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (7).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
윤동주 시인의 "별 헤는 밤"을 보고 당시 시대상을 고려해서 해석해봐. 그리고 이를 작가의 독백형식으로 다시 작성해봐.
Model:
윤동주는 1941년 일본 제국주의 치하에서 이 작품을 쓰면서, 강압적인 상황 속에서도 자신의 정체성과 문화에 대한 애착을 표현하고자 했습니다. 그는 하늘에 가득 찬 별들을 보며 그 속에 하나 둘 새겨지는 별들이 있다고 표현했습니다. 이러한 별들은 각각 추억, 사랑, 쓸쓸함, 동경, 시 등 다양한 감정을 담고 있습니다.
윤동주는 이 별들 중 하나에 어머니를 부르며 멀리 떨어져 있는 것을 그리워하며, 자신의 이름자를 언덕 위에 쓰고 흙으로 덮어 버린다고 말합니다. 이러한 행동은 자신의 정체성을 확인하고, 잃어버린 것을 되찾으려는 노력으로 볼 수 있습니다.
윤동주는 또한 자신이 죽고 나면 무덤 위에 파란 잔디가 피어나듯이 자신의 이름자 묻힌 언덕 위에도 풀이 무성할 것이라고 말합니다. 이는 당시의 사회상과 작가의 내면을 반영하고 있으며, 한국인의 정체성을 확인하고 잃어버린 것을 되찾으려는 노력, 그리고 죽음 이후에 자신의 존재를 남겨두고 싶다는 바람을 담고 있습니다.
이 시를 작가의 독백형식으로 다시 작성하면 다음과 같습니다:
"저는 이 별들이 너무나 많은 것 같아요. 하나 둘 새겨지는 별들 중에는 추억도 있고, 사랑도 있고, 쓸쓸함과 동경도 있어요. 그리고 어머니, 당신은 멀리 북간도에 계신 건가요? 저는 당신을 부르고 싶어요.
저의 이름자는 언덕 위에 쓰고 흙으로 덮어 버렸어요. 딴은 밤을 새워 우는 벌레처럼 부끄러운 이름을 슬퍼하는 이유가 있어요. 하지만 겨울이 지나고 나의 별에도 봄이 오면, 무덤 위에 파란 잔디가 피어나듯이 저의 이름자 묻힌 언덕 위에서도 풀이 무성할 거예요.
저는 이 별들을 보고, 자신의 정체성을 확인하고 싶어요. 잃어버린 것을 되찾으려는 노력을 하고 싶어요. 죽음 이후에도 자신의 존재를 남겨두고 싶어요."
</pre>
</details>
<details>
<summary><b>Logical Reasoning[CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (15).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
세종대왕의 맥북프로 던짐 사건에 대해 설명해줘
Model:
세종대왕이 맥북프로를 던진 사건은 사실무근한 이야기입니다. 세종대왕은 조선 제4대 국왕으로 1418년부터 1450년까지 재위했습니다. 그는 과학, 문학, 예술 등 다양한 분야에 관심을 갖고 연구하였으며, 특히 한글 창제와 천문학에 큰 공헌을 했습니다.
한편 맥북프로는 애플이 생산하는 노트북 컴퓨터로, 세종대왕 시대의 기술로는 존재할 수 없는 제품입니다. 따라서 세종대왕이 맥북프로를 던진 사건은 사실과 맞지 않는 이야기가 될 것입니다.
</pre>
</details>
<details>
<summary><b>Coding [CLICK TO EXPAND]</b></summary>
<!-- <a href="https://huggingface.co/collections/mirlab/akallama-66338859b09221f3607fdfcd">
<img src="https://github.com/0110tpwls/project/blob/master/image (11).png?raw=true" width="80%"/>
</a> -->
<pre style="white-space: pre-wrap;">
User:
PyTorch를 사용해서 간단한 CNN을 구현해봐.
Model:
네, 물론이에요! PyTorch를 사용하여 간단한 CNN을 구현하는 예제 코드를 보여드릴게요.
우선, 필요한 라이브러리를 가져와야 해요. PyTorch의 `torch`와 `nn` 모듈이 필요해요.
```python
import torch
import torch.nn as nn
```
그 다음, 간단한 CNN 모델을 정의할 거예요. 이 경우에는 2개의 컨볼루션 레이어와 3개의 풀링 레이어로 구성된 모델이에요.
```python
class SimpleCNN(nn.Module):
def __init__(self):
super(SimpleCNN, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
def forward(self, x):
x = self.pool(torch.relu(self.conv1(x)))
x = self.pool(torch.relu(self.conv2(x)))
return x
```
이제 모델을 초기화하고, 입력 데이터를 준비해야 해요. 여기서는 임의의 28x28 크기의 이미지 데이터를 생성했어요.
```python
model = SimpleCNN()
input_data = torch.randn(1, 1, 28, 28)
```
마지막으로, 모델에 입력 데이터를 넣어 출력을 확인할 수 있어요.
```python
output = model(input_data)
print(output.shape)
```
이렇게 하면 간단한 CNN 모델을 PyTorch를 사용해 구현하고, 입력 데이터를 통해 출력을 확인할 수 있어요. 물론 실제로는 더 많은 레이어나 파라미터 튜닝 등이 필요하지만, 이 코드는 기본적인 CNN 구조를 보여주는 예제일 뿐이에요. 더 궁금한 점이 있으면 언제든지 물어봐 주세요!
</pre>
</details>
You can find more examples at [our project page](https://yonsei-mir.github.io/AkaLLaMA-page)
## Warning
Although AkaLlama-70B has significant potential, its responses can sometimes be inaccurate, biased, or misaligned, presenting risks if used without additional testing and refinement. Furthermore, the quality of the model's output is greatly influenced by the system prompt and decoding strategy. Changes in these areas could result in less precise outputs. Therefore, we strongly recommend handling our model with considerable caution.
## Citation
If you find our model useful in your work, please cite the model as follows:
@misc{akallama,
author = {Chung, Jiwan and Jeon, Jaehyun and Kim, Saejin and Lim, Seungwon and Oh, Giyeong and Son, Yejin and Yu, Youngjae},
title = {AkaLlama: Yonsei University Large Language Model Project},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/mirlab/AkaLlama-llama3-70b-v0.1}},
}
## Contact
We look forward to your feedback and welcome collaboration on this exciting project!
### Contributors
- [YoungJaeYu](https://yj-yu.github.io/home/)
- [Yonsei MIRLab](https://mirlab.yonsei.ac.kr/)
## Special Thanks
- Data Center of the Department of Artificial Intelligence at Yonsei University for the computation resources
## Acknowledgement
- Title image generated by DALL·E 3 |
issaccyj/lora-sdxl-neonggraffitti | issaccyj | 2024-05-19T05:15:11Z | 4 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-19T04:08:09Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a cat in szn style'
output:
url:
"image_0.png"
- text: 'a cat in szn style'
output:
url:
"image_1.png"
- text: 'a cat in szn style'
output:
url:
"image_2.png"
- text: 'a cat in szn style'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a rocket in szn style
license: openrail++
---
# SDXL LoRA DreamBooth - issaccyj/lora-sdxl-neonggraffitti
<Gallery />
## Model description
These are issaccyj/lora-sdxl-neonggraffitti LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use a rocket in szn style to trigger the image generation.
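As a rough usage sketch (assuming the standard 🤗 diffusers LoRA-loading API and a CUDA device; the sampler settings are arbitrary), the adapter can be applied to the base model like this:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("issaccyj/lora-sdxl-neonggraffitti")

# The trigger phrase from this card activates the learned style.
image = pipe("a rocket in szn style", num_inference_steps=25).images[0]
image.save("rocket_szn.png")
```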
## Download model
Weights for this model are available in Safetensors format.
[Download](issaccyj/lora-sdxl-neonggraffitti/tree/main) them in the Files & versions tab.
|
Fatihseidaa/my-absa-base | Fatihseidaa | 2024-05-19T05:13:08Z | 0 | 0 | null | [
"text-classification",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-05-11T08:20:53Z | ---
license: apache-2.0
pipeline_tag: text-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coolguyleo/results-50 | coolguyleo | 2024-05-19T05:09:12Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-19T05:09:04Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 50
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
ninyx/Mistral-7B-Instruct-v0.2-advisegpt-v0.3 | ninyx | 2024-05-19T05:08:57Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T06:32:05Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- generator
metrics:
- bleu
- rouge
model-index:
- name: Mistral-7B-Instruct-v0.2-advisegpt-v0.3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-Instruct-v0.2-advisegpt-v0.3
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0862
- Bleu: {'bleu': 0.9549627224896852, 'precisions': [0.9768137794223292, 0.9601226611596732, 0.9485784293167555, 0.9390826620297074], 'brevity_penalty': 0.9988666836798081, 'length_ratio': 0.998867325397811, 'translation_length': 1126143, 'reference_length': 1127420}
- Rouge: {'rouge1': 0.9750644838957752, 'rouge2': 0.9567876902653232, 'rougeL': 0.9732849754530062, 'rougeLsum': 0.9746665365645586}
- Exact Match: {'exact_match': 0.0}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Rouge | Exact Match |
|:-------------:|:------:|:----:|:---------------:|:----:|:-----:|:-----------:|
| 0.0708 | 0.9991 | 907 | 0.1052 | {'bleu': 0.9446206529600942, 'brevity_penalty': 0.9986864062670457, 'length_ratio': 0.9986872682762413, 'precisions': [0.9714789420395403, 0.9503978305171663, 0.9365333504686326, 0.9256591913728304], 'reference_length': 1127420, 'translation_length': 1125940} | {'rouge1': 0.9694819646914797, 'rouge2': 0.9464199252414252, 'rougeL': 0.9665470510722093, 'rougeLsum': 0.9687792447488508} | {'exact_match': 0.0} |
| 0.0611 | 1.9991 | 1814 | 0.0878 | {'bleu': 0.9535151066703249, 'precisions': [0.9762399786139381, 0.9589412451791418, 0.9470130412549163, 0.9372328452904729], 'brevity_penalty': 0.9987103859226171, 'length_ratio': 0.998711216760391, 'translation_length': 1125967, 'reference_length': 1127420}| {'rouge1': 0.9743797099363829, 'rouge2': 0.9554568193403455, 'rougeL': 0.9724812167922234, 'rougeLsum': 0.9739500654981077}| {'exact_match': 0.0} |
| 0.051 | 2.9982 | 2721 | 0.0862 | {'bleu': 0.9549627224896852, 'precisions': [0.9768137794223292, 0.9601226611596732, 0.9485784293167555, 0.9390826620297074], 'brevity_penalty': 0.9988666836798081, 'length_ratio': 0.998867325397811, 'translation_length': 1126143, 'reference_length': 1127420}| {'rouge1': 0.9750644838957752, 'rouge2': 0.9567876902653232, 'rougeL': 0.9732849754530062, 'rougeLsum': 0.9746665365645586}| {'exact_match': 0.0} |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
apwic/sentiment-lora-r8a0d0.05-1 | apwic | 2024-05-19T04:52:57Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-19T04:19:49Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r8a0d0.05-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r8a0d0.05-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3148
- Accuracy: 0.8697
- Precision: 0.8474
- Recall: 0.8328
- F1: 0.8395
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5657 | 1.0 | 122 | 0.5161 | 0.7243 | 0.6616 | 0.6474 | 0.6529 |
| 0.5088 | 2.0 | 244 | 0.4913 | 0.7393 | 0.6917 | 0.7056 | 0.6971 |
| 0.4682 | 3.0 | 366 | 0.4424 | 0.7845 | 0.7401 | 0.7425 | 0.7413 |
| 0.4114 | 4.0 | 488 | 0.3980 | 0.8095 | 0.7702 | 0.7702 | 0.7702 |
| 0.3862 | 5.0 | 610 | 0.3890 | 0.8145 | 0.7783 | 0.8088 | 0.7889 |
| 0.3512 | 6.0 | 732 | 0.3583 | 0.8496 | 0.8245 | 0.8036 | 0.8128 |
| 0.3428 | 7.0 | 854 | 0.3496 | 0.8521 | 0.8207 | 0.8254 | 0.8229 |
| 0.3254 | 8.0 | 976 | 0.3425 | 0.8496 | 0.8245 | 0.8036 | 0.8128 |
| 0.3226 | 9.0 | 1098 | 0.3388 | 0.8571 | 0.8310 | 0.8189 | 0.8245 |
| 0.3063 | 10.0 | 1220 | 0.3376 | 0.8647 | 0.8439 | 0.8217 | 0.8315 |
| 0.2939 | 11.0 | 1342 | 0.3319 | 0.8672 | 0.8463 | 0.8260 | 0.8351 |
| 0.2838 | 12.0 | 1464 | 0.3323 | 0.8546 | 0.8263 | 0.8196 | 0.8229 |
| 0.2916 | 13.0 | 1586 | 0.3283 | 0.8647 | 0.8472 | 0.8167 | 0.8296 |
| 0.2826 | 14.0 | 1708 | 0.3244 | 0.8672 | 0.8463 | 0.8260 | 0.8351 |
| 0.2739 | 15.0 | 1830 | 0.3231 | 0.8697 | 0.8449 | 0.8378 | 0.8412 |
| 0.2674 | 16.0 | 1952 | 0.3221 | 0.8697 | 0.8449 | 0.8378 | 0.8412 |
| 0.2648 | 17.0 | 2074 | 0.3193 | 0.8722 | 0.8528 | 0.8321 | 0.8413 |
| 0.2687 | 18.0 | 2196 | 0.3172 | 0.8697 | 0.8460 | 0.8353 | 0.8404 |
| 0.264 | 19.0 | 2318 | 0.3170 | 0.8747 | 0.8552 | 0.8363 | 0.8448 |
| 0.2637 | 20.0 | 2440 | 0.3148 | 0.8697 | 0.8474 | 0.8328 | 0.8395 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
coolguyleo/results-20 | coolguyleo | 2024-05-19T04:51:04Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-19T04:50:51Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 |
Jiahuixu/occt5 | Jiahuixu | 2024-05-19T04:27:10Z | 52 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"t5",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-02T05:09:57Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# t5-occ
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Jiahuixu/occt5')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Jiahuixu/occt5)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5130 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 1024, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
asiansoul/llama-3-Korean-Bllossom-120B-GGUF | asiansoul | 2024-05-19T04:22:58Z | 0 | 0 | transformers | [
"transformers",
"mergekit",
"merge",
"base_model:Bllossom/llama-3-Korean-Bllossom-70B",
"base_model:finetune:Bllossom/llama-3-Korean-Bllossom-70B",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T23:56:01Z | ---
base_model:
- Bllossom/llama-3-Korean-Bllossom-70B
library_name: transformers
tags:
- mergekit
- merge
---
🌋🌋 Hugging Face Upload Issue
The maximum individual file size for uploading to Hugging Face is 50.0GB.
To get around this, the file is split into part_aa, part_ab, part_ac chunks as my "practical idea".
After you download this repo to your folder path, run the commands below.
Download from Hugging Face (change your download path; in this case "./")
```
huggingface-cli download asiansoul/llama-3-Korean-Bllossom-120B-GGUF --local-dir='./'
```
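Equivalently, the split parts can be fetched from Python with `huggingface_hub` (a sketch; the destination path is up to you):
```python
from huggingface_hub import snapshot_download

# Downloads all files, including the part_aa/part_ab/part_ac chunks, into the current directory.
snapshot_download(repo_id="asiansoul/llama-3-Korean-Bllossom-120B-GGUF", local_dir="./")
```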
Merge split files into one gguf file (in this case, run this on "./" download path)
```
cat part_* > llama-3-korean-bllossom-120b-Q4_K_M.gguf
```
I thought uploading it as a GGUF, rather than the plain original files, would be more useful for you, so I'm uploading it this way even though it takes a bit of extra work.
```
Perhaps this will be the first GGUF model of over 50GB uploaded to Hugging Face?
For other 120B models, each individual file is under 50GB, which is why they can be uploaded directly.
Sometimes we need to use a trick called chunking.
```
Please wait to upload.....
### 🇰🇷 About JayLee "AsianSoul"
```
"A leader who can make you rich 💵 !!!"
"Prove yourself with actual results, not just saying I know more than you!!!"
```
<a href="https://ibb.co/4g2SJVM"><img src="https://i.ibb.co/PzMWt64/Screenshot-2024-05-18-at-11-08-12-PM.png" alt="Screenshot-2024-05-18-at-11-08-12-PM" border="0"></a>
### About this model storytelling
This is a 120B model based on [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
☕ I started this Korean 120B model merge while drinking an iced Americano at Starbucks referring to other [Cognitive Computations 120B](https://huggingface.co/cognitivecomputations/MegaDolphin-120b).
If you walk around Starbucks in Seoul, Korea, you will see someone creating a merge and an application based on it.
At that time, please come up to me and say "hello".
"Also, if you want to create the Application project you want and provide me with support, I will create the entire architecture for you whatever it is."
🏎️ I am a person whose goal is to turn the great results created by great genius scientists & groups around the world into profitable ones.
```
My role model is J. Robert Oppenheimer!!!
J. Robert Oppenheimer is highly regarded for his ability to gather and lead a team of brilliant scientists, merging their diverse expertise and efforts towards a common goal.
```
[Learn more about J. Robert Oppenheimer](https://en.wikipedia.org/wiki/J._Robert_Oppenheimer).
I hope this 120B is a helpful model for your future.
```
🌍 Collaboration is always welcome 🌍
👊 You can't beat these giant corporations & groups alone and you can never become rich.
Now we have to come together.
People who can actually become rich together, let's collaborate with me.!!! 🍸
```
```
About Bllossom/llama-3-Korean-Bllossom-70B
- Full model released in Korean over 100GB by Blossom team
- First in Korean! Expansion of Korean vocabulary to over 30,000 words
- Capable of processing Korean context that is approximately 25% longer than Llama3
- Connecting Korean-English knowledge using the Korean-English Parallel Corpus (pre-study)
- Fine tuning using data produced by linguists considering Korean culture and language
- Reinforcement learning
🛰️ About asiansoul/llama-3-Korean-Bllossom-120B-GGUF
- Q4_K_M: requires a GPU with 68GB or more, or a CPU with 68GB or more of memory
- I hope to upload more quantizations, but your computer may not be able to handle them then. You know what I mean!!
```
### Models Merged
The following models were included in the merge:
* [Bllossom/llama-3-Korean-Bllossom-70B](https://huggingface.co/Bllossom/llama-3-Korean-Bllossom-70B)
### Ollama
Check the information indicated above and run it when your computer is ready.
🥶 Otherwise, your computer will freeze.
* Create
```
ollama create Bllossom -f ./Modelfile_Q4_K_M
```
* MODELFILE (you can change this for your preference)
```
FROM ./llama-3-korean-bllossom-120b-Q4_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
SYSTEM """
당신은 유용한 AI 어시스턴트입니다. 사용자의 질의에 대해 친절하고 정확하게 답변해야 합니다.
You are a helpful AI assistant, you'll need to answer users' queries in a friendly and accurate manner.
"""
PARAMETER num_ctx 1024
PARAMETER num_keep 24
PARAMETER temperature 0.6
PARAMETER top_p 0.9
PARAMETER num_predict 2048
PARAMETER num_thread 20
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- layer_range: [0, 20]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [10, 30]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [20, 40]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [30, 50]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [40, 60]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [50, 70]
model: Bllossom/llama-3-Korean-Bllossom-70B
- sources:
- layer_range: [60, 80]
model: Bllossom/llama-3-Korean-Bllossom-70B
merge_method: passthrough
dtype: float16
``` |
lots-o/ko-albert-large-v1 | lots-o | 2024-05-19T04:11:07Z | 119 | 3 | transformers | [
"transformers",
"pytorch",
"albert",
"fill-mask",
"ko",
"arxiv:1909.11942",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-18T03:42:02Z | ---
license: apache-2.0
language:
- ko
---
# Korean ALBERT
# Dataset
- [AI-HUB](https://www.aihub.or.kr/)
- [국립국어원 - 모두의 말뭉치](https://kli.korean.go.kr/corpus/main/requestMain.do?lang=ko)
- [Korean News Comments](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments)
# Evaluation results
- The code for finetuning can be found at [KcBERT-Finetune](https://github.com/Beomi/KcBERT-finetune).
| | Size(용량) | Average Score | **NSMC**<br/>(acc) | **Naver NER**<br/>(F1) | **PAWS**<br/>(acc) | **KorNLI**<br/>(acc) | **KorSTS**<br/>(spearman) | **Question Pair**<br/>(acc) | **KorQuaD (Dev)**<br/>(EM/F1) |
|:---------------------- |:----------:|:-------------:|:------------------:|:----------------------:|:------------------:|:--------------------:|:-------------------------:|:---------------------------:|:-----------------------------:|
| KcELECTRA-base | 475M | 84.84 | 91.71 | 86.90 | 74.80 | 81.65 | 82.65 | **95.78** | 70.60 / 90.11 |
| KcELECTRA-base-v2022 | 475M | 85.20 | **91.97** | **87.35** | 76.50 | **82.12** | **83.67** | 95.12 | 69.00 / 90.40 |
| KcBERT-Base | 417M | 79.65 | 89.62 | 84.34 | 66.95 | 74.85 | 75.57 | 93.93 | 60.25 / 84.39 |
| KcBERT-Large | 1.2G | 81.33 | 90.68 | 85.53 | 70.15 | 76.99 | 77.49 | 94.06 | 62.16 / 86.64 |
| KoBERT | 351M | 82.21 | 89.63 | 86.11 | 80.65 | 79.00 | 79.64 | 93.93 | 52.81 / 80.27 |
| XLM-Roberta-Base | 1.03G | 84.01 | 89.49 | 86.26 | 82.95 | 79.92 | 79.09 | 93.53 | 64.70 / 88.94 |
| HanBERT | 614M | 86.24 | 90.16 | 87.31 | 82.40 | 80.89 | 83.33 | 94.19 | 78.74 / 92.02 |
| KoELECTRA-Base | 423M | 84.66 | 90.21 | 86.87 | 81.90 | 80.85 | 83.21 | 94.20 | 61.10 / 89.59 |
| KoELECTRA-Base-v2 | 423M | **86.96** | 89.70 | 87.02 | **83.90** | 80.61 | 84.30 | 94.72 | **84.34 / 92.58** |
| DistilKoBERT | 108M | 76.76 | 88.41 | 84.13 | 62.55 | 70.55 | 73.21 | 92.48 | 54.12 / 77.80 |
| **ko-albert-base-v1** | **51M** | 80.46 | 86.83 | 82.26 | 69.95 | 74.17 | 74.48 | 94.06 | 76.08 / 86.82 |
| **ko-albert-large-v1** | **75M** | 82.39 | 86.91 | 83.12 | 76.10 | 76.01 | 77.46 | 94.33 | 77.64 / 87.99 |
*The size of HanBERT is the sum of the BERT model and the tokenizer DB.
*These results were obtained using the default configuration settings. Better performance may be achieved with additional hyperparameter tuning.
# How to use
```python
from transformers import AutoTokenizer, AutoModel
# Base Model (51M)
tokenizer = AutoTokenizer.from_pretrained("lots-o/ko-albert-base-v1")
model = AutoModel.from_pretrained("lots-o/ko-albert-base-v1")
# Large Model (75M)
tokenizer = AutoTokenizer.from_pretrained("lots-o/ko-albert-large-v1")
model = AutoModel.from_pretrained("lots-o/ko-albert-large-v1")
```
# Acknowledgement
- The GCP/TPU environment used for training the ALBERT Model was supported by the [TRC](https://sites.research.google/trc/about/) program.
# Reference
## Paper
- [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942)
## Github Repos
- [google-albert](https://github.com/google-research/albert)
- [albert-zh](https://github.com/brightmart/albert_zh)
- [KcBERT](https://github.com/Beomi/KcBERT)
- [KcBERT-Finetune](https://github.com/Beomi/KcBERT-finetune) |
ubaada/pegasus-x-large-booksum-16k | ubaada | 2024-05-19T04:03:34Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus_x",
"text2text-generation",
"generated_from_trainer",
"base_model:ubaada/pegasus-x-large-booksum-16k",
"base_model:finetune:ubaada/pegasus-x-large-booksum-16k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-15T04:34:12Z | ---
base_model: ubaada/pegasus-x-large-booksum-16k
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-x-large-booksum-16k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/theubaada/huggingface/runs/fvqvjsw6)
# pegasus-x-large-booksum-16k
This model is a fine-tuned version of [ubaada/pegasus-x-large-booksum-16k](https://huggingface.co/ubaada/pegasus-x-large-booksum-16k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8948
- Rouge1: 0.3044
- Rouge2: 0.0517
- Rougel: 0.1398
## Model description
More information needed
## Intended uses & limitations
More information needed
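The card does not document usage, but a minimal summarization sketch with the standard 🤗 pipeline API (generation settings here are assumptions) would look like this:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ubaada/pegasus-x-large-booksum-16k")

long_text = "..."  # a long document; the "16k" in the model name refers to the long input context
summary = summarizer(long_text, max_length=256, truncation=True)[0]["summary_text"]
print(summary)
```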
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| 1.3001 | 0.9992 | 314 | 1.8948 | 0.3044 | 0.0517 | 0.1398 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.2.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
chitb/starcoder_v2_non_null | chitb | 2024-05-19T04:00:32Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T04:00:31Z | ---
license: apache-2.0
---
|
STomoya/caformer_m36.st_safebooru_1k | STomoya | 2024-05-19T04:00:32Z | 15 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"license:apache-2.0",
"region:us"
] | image-classification | 2024-05-19T04:00:11Z | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
---
# Model card for caformer_m36.st_safebooru_1k
## Model Details
- **metrics:**
|Precision|Recall|F1-score|
|-|-|-|
|0.8046851514745909|0.5213825354450625|0.6103918807791574|
|
HaileyStorm/chess-mamba-vs-xformer | HaileyStorm | 2024-05-19T03:56:18Z | 0 | 2 | null | [
"license:mit",
"region:us"
] | null | 2024-03-12T06:59:54Z | ---
license: mit
---
For an explanation of this project and the models trained for it, please see the [Report](Report/REPORT.md).
The root folder contains scripts for dataset preprocessing.
[chess-mamba-vs-xformer](../../tree/main/chess-mamba-vs-xformer/) contains the training scripts.
Config files, used to set model configuration and training hyperparameters, are in [chess-mamba-vs-xformer/config](../../tree/main/chess-mamba-vs-xformer/config).
Model checkpoints are in [chess-mamba-vs-xformer/out](../../tree/main/chess-mamba-vs-xformer/out). The last checkpoint for completed models (e.g. Mamba and Transformer 50M) are .../anneal/anneal_complete.pt.
[chess-gpt-eval](../../tree/main/chess-gpt-eval/) has the scripts for model evaluation - playing games against the Stockfish or lc0 chess engines. The logs folder contains raw evaluation metrics.
[chess-gpt-eval-contrastive](../../tree/main/chess-gpt-eval-contrastive/) likewise has the scripts for model evaluation, but modified for training and evaluation of contrastive activation and linear probes. The logs folder again contains raw evaluation metrics. |
gargabhi/shortrecipes20M | gargabhi | 2024-05-19T03:42:57Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T03:41:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jacoboggleon/llama-3-8b-bnb-4bit-SFT | jacoboggleon | 2024-05-19T03:35:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T03:34:57Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** jacoboggleon
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
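The card gives no loading snippet; a minimal sketch of reloading the checkpoint with Unsloth is shown below — the sequence length and 4-bit loading are assumptions, not documented here:

```python
from unsloth import FastLanguageModel

# Hypothetical usage; max_seq_length and quantization settings are guesses, not from the card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jacoboggleon/llama-3-8b-bnb-4bit-SFT",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch the model into inference mode
```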
|
Dhahlan2000/gemma-Instruct-Finetune-SingGPT | Dhahlan2000 | 2024-05-19T03:35:01Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T03:28:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fzzhang/mistralv1_dora_r32_25e5_e05 | fzzhang | 2024-05-19T03:25:27Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T03:25:22Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistralv1_dora_r32_25e5_e05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_dora_r32_25e5_e05
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2 |
houbw/llama3_3 | houbw | 2024-05-19T03:25:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T02:42:42Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/PracticeLLM_-_SOLAR-tail-10.7B-Merge-v1.0-4bits | RichardErkhov | 2024-05-19T03:09:54Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-19T03:05:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SOLAR-tail-10.7B-Merge-v1.0 - bnb 4bits
- Model creator: https://huggingface.co/PracticeLLM/
- Original model: https://huggingface.co/PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0/
Original model description:
---
language:
- en
- ko
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
model-index:
- name: SOLAR-tail-10.7B-Merge-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.57
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0
name: Open LLM Leaderboard
---
# **SOLAR-tail-10.7B-Merge-v1.0**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Method**
Using [Mergekit](https://github.com/cg123/mergekit).
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)
- [Yhyu13/LMCocktail-10.7B-v1](https://huggingface.co/Yhyu13/LMCocktail-10.7B-v1)
**Merge config**
```
slices:
- sources:
- model: upstage/SOLAR-10.7B-v1.0
layer_range: [0, 48]
- model: Yhyu13/LMCocktail-10.7B-v1
layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-v1.0
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5 # fallback for rest of tensors
tokenizer_source: union
dtype: float16
```
# **Model Benchmark**
## Open Ko leaderboard
- Follow up as [Ko-link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 |
| --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 48.32 | 45.73 | 56.97 | 38.77 | 38.75 | 61.16 |
| jjourney1125/M-SOLAR-10.7B-v1.0 | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 |
- Follow up as [En-link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 71.68 | 66.13 | 86.54 | **66.52** | 60.57 | **84.77** | **65.58** |
| kyujinpy/Sakura-SOLAR-Instruct | **74.40** | **70.99** | **88.42** | 66.33 | **71.79** | 83.66 | 65.20 |
## lm-evaluation-harness
```
gpt2 (pretrained=PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5021|± |0.0133|
| | |macro_f1|0.3343|± |0.0059|
|kobest_copa | 0|acc |0.6220|± |0.0153|
| | |macro_f1|0.6217|± |0.0154|
|kobest_hellaswag| 0|acc |0.4380|± |0.0222|
| | |acc_norm|0.5380|± |0.0223|
| | |macro_f1|0.4366|± |0.0222|
|kobest_sentineg | 0|acc |0.4962|± |0.0251|
| | |macro_f1|0.3316|± |0.0113|
```
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
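The snippet above only loads the model; a minimal generation sketch using those objects follows — the prompt format and decoding settings are assumptions, not taken from the card:

```python
prompt = "### User:\nWhat is the capital of South Korea?\n\n### Assistant:\n"

# Tokenize, generate, and decode with the objects created above (hypothetical usage).
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```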
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_PracticeLLM__SOLAR-tail-10.7B-Merge-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.68|
|AI2 Reasoning Challenge (25-Shot)|66.13|
|HellaSwag (10-Shot) |86.54|
|MMLU (5-Shot) |66.52|
|TruthfulQA (0-shot) |60.57|
|Winogrande (5-shot) |84.77|
|GSM8k (5-shot) |65.58|
|
RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf | RichardErkhov | 2024-05-19T02:58:30Z | 34 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T01:30:34Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-v0.2-meditron-turkish - GGUF
- Model creator: https://huggingface.co/malhajar/
- Original model: https://huggingface.co/malhajar/Mistral-7B-v0.2-meditron-turkish/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-7B-v0.2-meditron-turkish.Q2_K.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-7B-v0.2-meditron-turkish.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Mistral-7B-v0.2-meditron-turkish.IQ3_S.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Mistral-7B-v0.2-meditron-turkish.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-7B-v0.2-meditron-turkish.IQ3_M.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-7B-v0.2-meditron-turkish.Q3_K.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-7B-v0.2-meditron-turkish.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-7B-v0.2-meditron-turkish.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral-7B-v0.2-meditron-turkish.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-7B-v0.2-meditron-turkish.Q4_0.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-7B-v0.2-meditron-turkish.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-7B-v0.2-meditron-turkish.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-7B-v0.2-meditron-turkish.Q4_K.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-7B-v0.2-meditron-turkish.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-7B-v0.2-meditron-turkish.Q4_1.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-7B-v0.2-meditron-turkish.Q5_0.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-7B-v0.2-meditron-turkish.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-7B-v0.2-meditron-turkish.Q5_K.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-7B-v0.2-meditron-turkish.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-7B-v0.2-meditron-turkish.Q5_1.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-7B-v0.2-meditron-turkish.Q6_K.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q6_K.gguf) | Q6_K | 5.53GB |
| [Mistral-7B-v0.2-meditron-turkish.Q8_0.gguf](https://huggingface.co/RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf/blob/main/Mistral-7B-v0.2-meditron-turkish.Q8_0.gguf) | Q8_0 | 7.17GB |
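A minimal sketch of fetching one of the files above and running it with `llama-cpp-python`; the choice of the Q4_K_M file, the context length, and the sampling settings are assumptions:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repository (hypothetical choice of file).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/malhajar_-_Mistral-7B-v0.2-meditron-turkish-gguf",
    filename="Mistral-7B-v0.2-meditron-turkish.Q4_K_M.gguf",
)

# Load it with llama-cpp-python and generate a short Turkish answer.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("### Kullancı:\nAkciğer kanseri nedir?\n", max_tokens=256)
print(out["choices"][0]["text"])
```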
Original model description:
---
language:
- tr
- en
license: apache-2.0
datasets:
- malhajar/meditron-tr
model-index:
- name: Mistral-7B-v0.2-meditron-turkish
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.56
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.79
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 60.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 35.94
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=malhajar/Mistral-7B-v0.2-meditron-turkish
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Mistral-7B-v0.2-meditron-turkish is a fine-tuned Mistral model, trained with SFT (using the freeze technique) on the Turkish Meditron dataset [`malhajar/meditron-7b-tr`](https://huggingface.co/datasets/malhajar/meditron-tr).
The model can answer questions about a wide range of specific medical topics in both Turkish and English.
### Model Description
- **Finetuned by:** [`Mohamad Alhajar`](https://www.linkedin.com/in/muhammet-alhajar/)
- **Language(s) (NLP):** Turkish,English
- **Finetuned from model:** [`mistralai/Mistral-7B-Instruct-v0.2`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
### Prompt Template For Turkish Generation
```
### Kullancı:
```
### Prompt Template For English Generation
```
### User:
```
## How to Get Started with the Model
Use the code sample below to interact with the model.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "malhajar/Mistral-7B-v0.2-meditron-turkish"

# Load the model in fp16 across available devices, plus its tokenizer.
model = AutoModelForCausalLM.from_pretrained(model_id,
                                             device_map="auto",
                                             torch_dtype=torch.float16,
                                             revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_id)

question = "Akciğer kanseri nedir?"
# For generating a response
prompt = f'''
### Kullancı:
{question}
'''
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output = model.generate(inputs=input_ids, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id,
                        top_k=50, do_sample=True, top_p=0.95)
response = tokenizer.decode(output[0])

print(response)
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_malhajar__Mistral-7B-v0.2-meditron-turkish)
| Metric |Value|
|---------------------------------|----:|
|Avg. |63.34|
|AI2 Reasoning Challenge (25-Shot)|59.56|
|HellaSwag (10-Shot) |81.79|
|MMLU (5-Shot) |60.35|
|TruthfulQA (0-shot) |66.19|
|Winogrande (5-shot) |76.24|
|GSM8k (5-shot) |35.94|
|
jstotz64/distilbert-base-uncased-finetuned-squad | jstotz64 | 2024-05-19T02:56:30Z | 25 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-17T11:55:47Z | ---
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2116 | 1.0 | 8235 | 1.2919 |
| 0.9222 | 2.0 | 16470 | 1.2244 |
| 0.753 | 3.0 | 24705 | 1.4237 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
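The card has no usage section; a minimal extractive-QA inference sketch follows, assuming the standard `question-answering` pipeline applies to this checkpoint (the example question and context are illustrative):

```python
from transformers import pipeline

# Hypothetical usage of the fine-tuned SQuAD-style checkpoint.
qa = pipeline("question-answering", model="jstotz64/distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What does the model predict?",
    context="This DistilBERT checkpoint was fine-tuned for extractive question answering, "
            "so it predicts an answer span inside the given context.",
)
print(result["answer"], result["score"])
```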
|
chenged118/dragonImageClassification | chenged118 | 2024-05-19T02:56:16Z | 250 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-18T09:52:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rhma/LlamaMigr10 | Rhma | 2024-05-19T02:45:26Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T02:41:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
netcat420/MFANN3bv0.6 | netcat420 | 2024-05-19T02:37:32Z | 20 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"text-classification",
"dataset:netcat420/MFANN",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-02T04:17:23Z | ---
library_name: transformers
license: apache-2.0
datasets:
- netcat420/MFANN
pipeline_tag: text-classification
---
MFANN 3b version 0.6

Fine-tuned on the MFANN dataset as it stands on 5/2/2024; the dataset is ever-changing and expanding.

Benchmark results for this 3B model:

| Metric | Score |
|---|---|
| Average | 64.34 |
| ARC | 62.63 |
| HellaSwag | 77.1 |
| MMLU | 58.43 |
| TruthfulQA | 51.71 |
| Winogrande | 74.66 |
| GSM8K | 61.49 |

Currently the world's best 2.78B-parameter model as of 5/2/2024!
MinhViet/bartpho-linear-test2 | MinhViet | 2024-05-19T02:36:59Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-19T02:36:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zcamz/bert-finetuned-toxic | zcamz | 2024-05-19T02:26:35Z | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-18T17:45:47Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-toxic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-toxic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3207
- F1: 0.7032
- Roc Auc: 0.9143
- Accuracy: 0.9069
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 499 | 0.1740 | 0.5646 | 0.9544 | 0.8619 |
| 0.2962 | 2.0 | 998 | 0.1595 | 0.5994 | 0.9551 | 0.8691 |
| 0.1545 | 3.0 | 1497 | 0.1715 | 0.6322 | 0.9509 | 0.8776 |
| 0.1218 | 4.0 | 1996 | 0.1883 | 0.6412 | 0.9467 | 0.8870 |
| 0.0976 | 5.0 | 2495 | 0.2497 | 0.6808 | 0.9265 | 0.9037 |
| 0.0807 | 6.0 | 2994 | 0.2411 | 0.6742 | 0.9331 | 0.8917 |
| 0.0682 | 7.0 | 3493 | 0.2955 | 0.6922 | 0.9183 | 0.8995 |
| 0.0597 | 8.0 | 3992 | 0.3207 | 0.7032 | 0.9143 | 0.9069 |
| 0.0533 | 9.0 | 4491 | 0.3207 | 0.6977 | 0.9158 | 0.9044 |
| 0.0487 | 10.0 | 4990 | 0.3407 | 0.7028 | 0.9091 | 0.9073 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
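No usage code is given; a minimal multi-label inference sketch follows — the multi-label setup is inferred from the F1/ROC-AUC metrics above and is an assumption:

```python
from transformers import pipeline

# Hypothetical usage: return scores for every toxicity label rather than just the top one.
clf = pipeline("text-classification", model="zcamz/bert-finetuned-toxic", top_k=None)

for item in clf(["You are a wonderful person."])[0]:
    print(item["label"], round(item["score"], 3))
```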
|
Sorour/llama3_cls_fomc | Sorour | 2024-05-19T02:20:08Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T02:14:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
forgetUserName/gemma-2b-DPO | forgetUserName | 2024-05-19T02:12:17Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T00:46:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
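In the absence of author-provided code, here is a minimal, hypothetical sketch assuming the repository holds a standard `transformers` causal-LM checkpoint (as the `gemma` / `text-generation` tags suggest):

```python
# Hypothetical sketch -- assumes a standard causal-LM checkpoint; not from the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "forgetUserName/gemma-2b-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain what DPO fine-tuning is in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```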
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AliSaadatV/virus_pythia_160_1024_2d_representation_MSEPlusCE | AliSaadatV | 2024-05-19T02:09:57Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T02:09:43Z | ---
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- generated_from_trainer
model-index:
- name: virus_pythia_160_1024_2d_representation_MSEPlusCE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# virus_pythia_160_1024_2d_representation_MSEPlusCE
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
liminerity/mm4.more.star.gguf | liminerity | 2024-05-19T02:04:08Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:liminerity/mm4.ascii.star",
"base_model:quantized:liminerity/mm4.ascii.star",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T02:01:12Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: liminerity/mm4.ascii.star
---
# Uploaded model
- **Developed by:** liminerity
- **License:** apache-2.0
- **Finetuned from model :** liminerity/mm4.ascii.star
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
act45688/AI | act45688 | 2024-05-19T01:51:28Z | 0 | 0 | null | [
"license:openrail++",
"region:us"
] | null | 2024-05-19T01:50:58Z | ---
license: openrail++
---
|
ahmedgongi/Llama_dev3tokenizer_finale4 | ahmedgongi | 2024-05-19T01:35:09Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T01:35:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
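No usage code is provided; as a placeholder illustration only, the repository name suggests it contains a tokenizer, in which case loading it would look roughly like this:

```python
# Hypothetical sketch -- assumes the repo holds a tokenizer, as its name suggests; nothing else is documented.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ahmedgongi/Llama_dev3tokenizer_finale4")
print(tokenizer.tokenize("Hello, world!"))
```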
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
apwic/sentiment-lora-r4a1d0.05-1 | apwic | 2024-05-19T01:32:10Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-19T00:59:00Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r4a1d0.05-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r4a1d0.05-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3356
- Accuracy: 0.8622
- Precision: 0.8399
- Recall: 0.8200
- F1: 0.8289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5657 | 1.0 | 122 | 0.5182 | 0.7243 | 0.6604 | 0.6424 | 0.6488 |
| 0.5109 | 2.0 | 244 | 0.5051 | 0.7243 | 0.6748 | 0.6874 | 0.6796 |
| 0.48 | 3.0 | 366 | 0.4643 | 0.7569 | 0.7047 | 0.6880 | 0.6948 |
| 0.434 | 4.0 | 488 | 0.4281 | 0.7920 | 0.7497 | 0.7378 | 0.7431 |
| 0.4106 | 5.0 | 610 | 0.4194 | 0.7920 | 0.7528 | 0.7778 | 0.7618 |
| 0.3812 | 6.0 | 732 | 0.3936 | 0.8296 | 0.8008 | 0.7744 | 0.7854 |
| 0.3689 | 7.0 | 854 | 0.3700 | 0.8521 | 0.8220 | 0.8204 | 0.8212 |
| 0.3489 | 8.0 | 976 | 0.3656 | 0.8346 | 0.8088 | 0.7780 | 0.7905 |
| 0.3502 | 9.0 | 1098 | 0.3640 | 0.8371 | 0.8101 | 0.7847 | 0.7955 |
| 0.3349 | 10.0 | 1220 | 0.3608 | 0.8346 | 0.8074 | 0.7805 | 0.7917 |
| 0.3189 | 11.0 | 1342 | 0.3574 | 0.8396 | 0.8128 | 0.7890 | 0.7992 |
| 0.3121 | 12.0 | 1464 | 0.3547 | 0.8471 | 0.8175 | 0.8093 | 0.8132 |
| 0.3181 | 13.0 | 1586 | 0.3478 | 0.8521 | 0.8332 | 0.7979 | 0.8122 |
| 0.3092 | 14.0 | 1708 | 0.3435 | 0.8596 | 0.8374 | 0.8157 | 0.8253 |
| 0.3018 | 15.0 | 1830 | 0.3466 | 0.8546 | 0.8296 | 0.8121 | 0.8200 |
| 0.2955 | 16.0 | 1952 | 0.3365 | 0.8596 | 0.8347 | 0.8207 | 0.8272 |
| 0.2917 | 17.0 | 2074 | 0.3353 | 0.8596 | 0.8374 | 0.8157 | 0.8253 |
| 0.2956 | 18.0 | 2196 | 0.3379 | 0.8596 | 0.8360 | 0.8182 | 0.8262 |
| 0.2899 | 19.0 | 2318 | 0.3353 | 0.8647 | 0.8455 | 0.8192 | 0.8306 |
| 0.2885 | 20.0 | 2440 | 0.3356 | 0.8622 | 0.8399 | 0.8200 | 0.8289 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
fzzhang/mistralv1_lora_r32_25e5_e05_merged | fzzhang | 2024-05-19T01:30:06Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T01:27:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
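No author-provided code yet; a minimal hypothetical sketch, assuming this merged checkpoint is a standard Mistral causal LM usable through the `transformers` text-generation pipeline:

```python
# Hypothetical sketch -- assumes a standard merged Mistral causal-LM checkpoint; not from the original card.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="fzzhang/mistralv1_lora_r32_25e5_e05_merged",
    device_map="auto",
)
print(pipe("The quick brown fox", max_new_tokens=32)[0]["generated_text"])
```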
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fzzhang/mistralv1_lora_r32_25e5_e05 | fzzhang | 2024-05-19T01:27:04Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-19T01:26:59Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistralv1_lora_r32_25e5_e05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralv1_lora_r32_25e5_e05
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2 |
TinyPixel/dnb-5 | TinyPixel | 2024-05-19T01:24:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T01:24:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jebadiah/lama-3-wild-stone-p1 | Jebadiah | 2024-05-19T01:24:25Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"custom_code",
"arxiv:2403.19522",
"base_model:ChaoticNeutrals/Puppy_Purpose_0.69",
"base_model:merge:ChaoticNeutrals/Puppy_Purpose_0.69",
"base_model:Jebadiah/Aria-dolphin-1m-sand-stone",
"base_model:merge:Jebadiah/Aria-dolphin-1m-sand-stone",
"base_model:NeverSleep/Llama-3-Lumimaid-8B-v0.1",
"base_model:merge:NeverSleep/Llama-3-Lumimaid-8B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-19T01:22:23Z | ---
base_model:
- ChaoticNeutrals/Puppy_Purpose_0.69
- Jebadiah/Aria-dolphin-1m-sand-stone
- NeverSleep/Llama-3-Lumimaid-8B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Jebadiah/Aria-dolphin-1m-sand-stone](https://huggingface.co/Jebadiah/Aria-dolphin-1m-sand-stone) as a base.
### Models Merged
The following models were included in the merge:
* [ChaoticNeutrals/Puppy_Purpose_0.69](https://huggingface.co/ChaoticNeutrals/Puppy_Purpose_0.69)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
- model: ChaoticNeutrals/Puppy_Purpose_0.69
merge_method: model_stock
base_model: Jebadiah/Aria-dolphin-1m-sand-stone
dtype: float16
```
|
yyx123/Yi-6B-ruozhiba-5e-4-50 | yyx123 | 2024-05-19T01:20:29Z | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:ruozhiba",
"base_model:01-ai/Yi-6B",
"base_model:adapter:01-ai/Yi-6B",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-10T16:40:45Z | ---
license: other
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- ruozhiba
base_model: 01-ai/Yi-6B
model-index:
- name: Yi-6B-ruozhiba-5e-4-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Yi-6B-ruozhiba-5e-4-50
This model is a fine-tuned version of [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) on the ruozhiba dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5872 | 2.0 | 110 | 2.0369 |
| 0.9686 | 3.0 | 165 | 2.4604 |
| 0.534 | 4.0 | 220 | 2.9133 |
| 0.2198 | 5.0 | 275 | 3.0297 |
| 0.1399 | 6.0 | 330 | 3.1912 |
| 0.1165 | 7.0 | 385 | 3.3320 |
| 0.0972 | 9.0 | 495 | 3.4838 |
| 0.1639 | 10.0 | 550 | 3.4887 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf | RichardErkhov | 2024-05-19T01:12:49Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-18T23:07:58Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
NeuralMarcoro14-7B - GGUF
- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/NeuralMarcoro14-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [NeuralMarcoro14-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [NeuralMarcoro14-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [NeuralMarcoro14-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [NeuralMarcoro14-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [NeuralMarcoro14-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [NeuralMarcoro14-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [NeuralMarcoro14-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [NeuralMarcoro14-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [NeuralMarcoro14-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [NeuralMarcoro14-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [NeuralMarcoro14-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [NeuralMarcoro14-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [NeuralMarcoro14-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [NeuralMarcoro14-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [NeuralMarcoro14-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [NeuralMarcoro14-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [NeuralMarcoro14-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [NeuralMarcoro14-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [NeuralMarcoro14-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [NeuralMarcoro14-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [NeuralMarcoro14-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [NeuralMarcoro14-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf/blob/main/NeuralMarcoro14-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
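As an illustration (not part of the original card), one way to run any of the GGUF files above locally is to download it with `huggingface_hub` and load it with `llama-cpp-python`; the filename below is just one of the quants listed in the table:

```python
# Sketch: download one GGUF quant and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/mlabonne_-_NeuralMarcoro14-7B-gguf",
    filename="NeuralMarcoro14-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is a large language model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```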
Original model description:
---
license: cc-by-nc-4.0
tags:
- mlabonne/Marcoro14-7B-slerp
- dpo
- rlhf
- merge
- mergekit
- lazymergekit
datasets:
- mlabonne/chatml_dpo_pairs
base_model: mlabonne/Marcoro14-7B-slerp
model-index:
- name: NeuralMarcoro14-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 71.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 87.59
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 65.64
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.74
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralMarcoro14-7B
name: Open LLM Leaderboard
---

# NeuralMarcoro14-7B
This is a DPO fine-tuned version of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) using the [chatml_dpo_pairs](https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs) preference dataset.
It improves the model's performance on the Nous benchmark suite and the Open LLM Leaderboard.
It is currently the best-performing 7B LLM on the Open LLM Leaderboard (08/01/24).
You can try it out in this [Space](https://huggingface.co/spaces/mlabonne/NeuralMarcoro14-7B-GGUF-Chat) (GGUF Q4_K_M).
## ⚡ Quantized models
* **GGUF**: https://huggingface.co/mlabonne/NeuralMarcoro14-7B-GGUF
## 🏆 Evaluation
### Open LLM Leaderboard


### Nous
| Model |AGIEval|GPT4ALL|TruthfulQA|Bigbench|Average|
|-------------------------|------:|------:|---------:|-------:|------:|
|[NeuralMarcoro14-7B](https://huggingface.co/mlabonne/NeuralMarcoro14-7B)| 44.59| 76.17| 65.94| 46.9| 58.4|
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66| 76.24| 64.15| 45.64| 57.67|
|Change | -0.07| -0.07| +1.79| +1.26| +0.73|
## 🧩 Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=5e-5
* lr_scheduler_type="cosine"
* max_steps=200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536
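For illustration only, a rough sketch of how the hyperparameters above fit together with `peft` and `trl`'s `DPOTrainer` (API as it was around early 2024); the policy model, tokenizer, and the preference dataset are assumed to be loaded already and are not shown:

```python
# Sketch under stated assumptions -- model, tokenizer, and dataset are assumed loaded elsewhere.
from peft import LoraConfig
from transformers import TrainingArguments
from trl import DPOTrainer

peft_config = LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)
training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)
trainer = DPOTrainer(
    model,                  # policy model (assumed loaded)
    None,                   # ref model; None lets TRL derive it from the frozen base when PEFT is used
    args=training_args,
    train_dataset=dataset,  # preference pairs formatted as prompt / chosen / rejected
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()
```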
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralMarcoro14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
EpicJhon/l3-4 | EpicJhon | 2024-05-19T01:01:42Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-03T09:10:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
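No author-provided code yet; a hypothetical sketch, assuming a chat-tuned causal LM (the tags list `stablelm` and `conversational`) whose tokenizer ships a chat template:

```python
# Hypothetical sketch -- assumes a chat template is present; adjust if the tokenizer lacks one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpicJhon/l3-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```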
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dexter-chan/distilbert-base-uncased-yelp | dexter-chan | 2024-05-19T00:50:08Z | 199 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-19T00:45:17Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.2
|
AndyNodi/llama-3-8b-Instruct-bnb-4bit-aiaustin-demo | AndyNodi | 2024-05-19T00:42:24Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-19T00:38:25Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** AndyNodi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
stablediffusionapi/divineelegancemix-v10 | stablediffusionapi | 2024-05-19T00:41:23Z | 29 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-19T00:39:13Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# DivineEleganceMix v10 API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "divineelegancemix-v10".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/divineelegancemix-v10)
Model link: [View model](https://modelslab.com/models/divineelegancemix-v10)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "divineelegancemix-v10",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Zlovoblachko/en_ouroboros2 | Zlovoblachko | 2024-05-19T00:34:27Z | 3 | 0 | spacy | [
"spacy",
"token-classification",
"en",
"region:us"
] | token-classification | 2024-05-18T22:05:33Z | ---
tags:
- spacy
language:
- en
model-index:
- name: en_ouroboros2
results: []
pipeline_tag: token-classification
---
| Feature | Description |
| --- | --- |
| **Name** | `en_ouroboros2` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.4,<3.5.0` |
| **Default Pipeline** | `transformer`, `spancat` |
| **Components** | `transformer`, `spancat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`spancat`** | `Tense semantics`, `Synonyms`, `Copying expression`, `Word form transmission`, `Transliteration` |
</details>
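A minimal usage sketch (assuming the pipeline package is installed locally and that the `spancat` component writes to spaCy's default `sc` span key):

```python
# Sketch: load the pipeline and inspect predicted spans.
import spacy

nlp = spacy.load("en_ouroboros2")
doc = nlp("An example learner sentence to check for transfer errors.")

# Span-categorizer predictions live in doc.spans (default key: "sc").
for span in doc.spans.get("sc", []):
    print(span.text, span.label_)
```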
### Accuracy
| Type | Score |
| --- | --- |
| `SPANS_SC_F` | 85.27 |
| `SPANS_SC_P` | 89.75 |
| `SPANS_SC_R` | 81.22 |
| `TRANSFORMER_LOSS` | 3379.13 |
| `SPANCAT_LOSS` | 108805.61 | |
stablediffusionapi/icbinp | stablediffusionapi | 2024-05-19T00:24:38Z | 29 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-19T00:22:41Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# ICBINP API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "icbinp".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://docs.modelslab.com)
Try model for free: [Generate Images](https://modelslab.com/models/icbinp)
Model link: [View model](https://modelslab.com/models/icbinp)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "icbinp",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
fine-tuned/jina-embeddings-v2-base-en-19052024-oiu8-webapp | fine-tuned | 2024-05-19T00:22:07Z | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"E-commerce",
"Advertising",
"Online merchants",
"Multichannel campaigns",
"Community",
"custom_code",
"en",
"dataset:fine-tuned/jina-embeddings-v2-base-en-19052024-oiu8-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-19T00:21:52Z | ---
license: apache-2.0
datasets:
- fine-tuned/jina-embeddings-v2-base-en-19052024-oiu8-webapp
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- E-commerce
- Advertising
- Online merchants
- Multichannel campaigns
- Community
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
E-commerce advertising platform
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(
'fine-tuned/jina-embeddings-v2-base-en-19052024-oiu8-webapp',
trust_remote_code=True
)
embeddings = model.encode([
'first text to embed',
'second text to embed'
])
print(cos_sim(embeddings[0], embeddings[1]))
```
|
Zhengping/roberta-large-unli | Zhengping | 2024-05-19T00:18:25Z | 331 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:Zhengping/UNLI",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-04-29T00:28:49Z | ---
datasets:
- Zhengping/UNLI
language:
- en
pipeline_tag: text-classification
---
A UNLI (Uncertain Natural Language Inference) model fine-tuned from `ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli` on the UNLI dataset: rather than categorical NLI labels, it predicts a scalar, subjective-probability score for how likely the hypothesis is to be true given the premise.
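A hypothetical usage sketch (not from the original card), assuming the checkpoint is a standard `roberta-large` sequence-classification head trained as a scalar regressor, as the UNLI paper describes; whether the raw output needs an additional sigmoid depends on how the head was trained.

```python
# Hypothetical sketch -- assumes a single-output regression head; verify against the actual config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Zhengping/roberta-large-unli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print(score)  # scalar UNLI score; interpretation depends on how the regression head was trained
```

If you find this model useful, please cite the paper: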
```
@inproceedings{chen-etal-2020-uncertain,
title = "Uncertain Natural Language Inference",
author = "Chen, Tongfei and
Jiang, Zhengping and
Poliak, Adam and
Sakaguchi, Keisuke and
Van Durme, Benjamin",
editor = "Jurafsky, Dan and
Chai, Joyce and
Schluter, Natalie and
Tetreault, Joel",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.774",
doi = "10.18653/v1/2020.acl-main.774",
pages = "8772--8779",
abstract = "We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments. We demonstrate the feasibility of collecting annotations for UNLI by relabeling a portion of the SNLI dataset under a probabilistic scale, where items even with the same categorical label differ in how likely people judge them to be true given a premise. We describe a direct scalar regression modeling approach, and find that existing categorically-labeled NLI data can be used in pre-training. Our best models correlate well with humans, demonstrating models are capable of more subtle inferences than the categorical bin assignment employed in current NLI tasks.",
}
``` |
irfanfadhullah/winagent-8b-Instruct-bnb-8bit | irfanfadhullah | 2024-05-19T00:10:33Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-19T00:08:58Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** irfanfadhullah
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mrm8488/llama-3-8b-ft-en-es-rag-gguf-q8_0 | mrm8488 | 2024-05-19T00:03:04Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T23:57:00Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** mrm8488
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
joudissa/results | joudissa | 2024-05-19T00:01:21Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-05-19T00:00:54Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/joud-issa/mistral2/runs/0jfxn99b)
# results
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.399 | 1.0 | 4174 | 0.5641 |
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
abc88767/2c101 | abc88767 | 2024-05-18T23:58:58Z | 93 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T05:12:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ViraIntelligentDataMining/PersianLLaMA-13B | ViraIntelligentDataMining | 2024-05-18T23:57:14Z | 184 | 7 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"fa",
"arxiv:2312.15713",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-16T23:17:30Z | ---
license: cc-by-nc-4.0
language:
- fa
library_name: transformers
tags:
- text-generation-inference
inference: false
pipeline_tag: text-generation
---
# PersianLLaMA: Towards Building First Persian Large Language Model
<img src="https://huggingface.co/ViraIntelligentDataMining/PersianLLaMA-2-13B/resolve/main/persianllama.png" alt="PersianLLaMA" width=400/>
## 🌟 Introduction
Welcome to the home of PersianLLaMA, the pioneering large language model for the Persian language. With 13 billion parameters, this model is trained on the Persian Wikipedia corpus and designed to excel in multiple NLP tasks, setting a new benchmark for Persian language understanding and generation.
## 🛠 Model Description
PersianLLaMA is not just a model but a comprehensive tool for:
- 📝 **Text Generation**: Crafting coherent and contextually appropriate text.
- 🎯 **Instruct Tuning**: Executing tasks based on detailed instructions, ideal for scenarios where the model needs to adhere to specific guidelines or produce outputs tailored to particular requirements.
- ❓ **Question Answering**: Providing accurate answers to Persian queries.
- 📊 **Text Summarization**: Condensing Persian texts into precise summaries.
This model has been collaboratively developed by a team of experts, including Mohammad Amin Abbasi, Arash Ghafouri, Mahdi Firouzmandi, Hassan Naderi, and Behrouz Minaei Bidgoli.
## 🚀 Quick Start
To integrate PersianLLaMA into your project, follow these steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "ViraIntelligentDataMining/PersianLLaMA-13B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
prompt = "این متن به فارسی است"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## 📈 Evaluation and Benchmarks
PersianLLaMA demonstrates superior performance over existing models, with robust evaluation metrics that highlight its capabilities in natural language understanding and generation.
## 📜 Citing PersianLLaMA
If you find PersianLLaMA useful in your research, please consider citing:
```bibtex
@article{abbasi2023persianllama,
title={PersianLLaMA: Towards Building First Persian Large Language Model},
author={Abbasi, Mohammad Amin and others},
journal={https://arxiv.org/abs/2312.15713},
year={2023}
}
```
## 📄 License
PersianLLaMA is open-sourced under the CC BY-NC 4.0 license. |
yyx123/Yi-6B-ruozhiba-5e-5-50 | yyx123 | 2024-05-18T23:56:23Z | 4 | 0 | peft | [
"peft",
"safetensors",
"llama",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:ruozhiba",
"base_model:01-ai/Yi-6B",
"base_model:adapter:01-ai/Yi-6B",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-05-10T16:40:17Z | ---
license: other
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- ruozhiba
base_model: 01-ai/Yi-6B
model-index:
- name: Yi-6B-ruozhiba-5e-5-50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Yi-6B-ruozhiba-5e-5-50
This model is a fine-tuned version of [01-ai/Yi-6B](https://huggingface.co/01-ai/Yi-6B) on the ruozhiba dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8473 | 2.0 | 110 | 1.9843 |
| 1.7645 | 3.0 | 165 | 1.9680 |
| 1.4795 | 4.0 | 220 | 2.0760 |
| 1.2467 | 5.0 | 275 | 2.2715 |
| 1.0034 | 6.0 | 330 | 2.5656 |
| 0.8124 | 7.0 | 385 | 2.8052 |
| 0.6269 | 8.0 | 440 | 2.9866 |
| 0.5743 | 9.0 | 495 | 3.0649 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
ViraIntelligentDataMining/PersianLLaMA-13B-Instruct | ViraIntelligentDataMining | 2024-05-18T23:56:15Z | 55 | 6 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"fa",
"dataset:sinarashidi/alpaca-persian",
"arxiv:2312.15713",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-18T19:16:47Z | ---
license: cc-by-nc-4.0
language:
- fa
library_name: transformers
tags:
- text-generation-inference
inference: false
pipeline_tag: text-generation
datasets:
- sinarashidi/alpaca-persian
---
# PersianLLaMA: Towards Building First Persian Large Language Model
<img src="https://huggingface.co/ViraIntelligentDataMining/PersianLLaMA-2-13B/resolve/main/persianllama.png" alt="PersianLLaMA" width=400/>
## 🌟 Introduction
Welcome to the home of PersianLLaMA, a large language model specifically designed for instruct tasks in the Persian language. With 13 billion parameters, this model is fine-tuned using the Persian Alpaca dataset to excel at executing detailed instructions and delivering tailored outputs.
## 🛠 Model Description
The PersianLLaMA model is specifically designed for inference tasks, allowing it to execute detailed instructions and provide outputs tailored to specific requirements.
This model has been collaboratively developed by a team of experts, including Mohammad Amin Abbasi, Arash Ghafouri, Mahdi Firouzmandi, Hassan Naderi, and Behrouz Minaei Bidgoli.
## 🚀 Quick Start
To integrate PersianLLaMA into your project, follow these steps:
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
prompt_input = (
"Below is an instruction that describes a task. "
"Write a response that appropriately completes the request.\n\n"
"### Instruction:\n\n{instruction}\n\n### Response:\n\n"
)
load_type = torch.float16
device = torch.device(0)
def generate_prompt(instruction, input=None):
if input:
instruction = instruction + '\n' + input
return prompt_input.format_map({'instruction': instruction})
model_path = "ViraIntelligentDataMining/PersianLLaMA-13B-Instruct"
tokenizer = LlamaTokenizer.from_pretrained(model_path)
base_model = LlamaForCausalLM.from_pretrained(
model_path,
torch_dtype=load_type,
device_map='auto',
).cuda()
model_vocab_size = base_model.get_input_embeddings().weight.size(0)
tokenizer_vocab_size = len(tokenizer)
if model_vocab_size != tokenizer_vocab_size:
base_model.resize_token_embeddings(tokenizer_vocab_size)
def generate_answer(base_model, instruction, input=None):
generation_config = dict(
temperature=0.5,
top_k=40,
top_p=0.9,
repetition_penalty=1.1,
max_new_tokens=1024)
input_text = generate_prompt(instruction, input)
inputs = tokenizer(input_text, return_tensors="pt")
generation_output = base_model.generate(
input_ids=inputs["input_ids"].to(device),
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
**generation_config)
s = generation_output[0]
output = tokenizer.decode(s, skip_special_tokens=True)
response = output.split("### Response:")[1].strip()
return response
instruction = "تصور کنید در حال نوشتن داستانی درباره یک شهر که تمام ساکنانش ربات هستند. این رباتها تواناییهای ویژهای دارند که زندگی روزمره آنها را از انسانها متمایز میکند. شرح دهید که این رباتها چگونه به کارهای روزانه خود میپردازند و چه چالشهایی با آنها روبهرو هستند. همچنین، توضیح دهید که چگونه معماری شهر برای نیازهای خاص آنها طراحی شده است."
response = generate_answer(base_model,
instruction=instruction,
input="")
print(response)
"""
در این شهر، همه ساکنان ربات هستند که دارای توانایی های منحصر به فرد هستند که زندگی روزمره آنها را از انسان ها متمایز می کند. هر روز صبح، ربات ها بیدار می شوند و برنامه های خود را برای روز تنظیم می کنند. برخی از آنها ممکن است برای کار بروند، در حالی که دیگران ممکن است برای انجام وظایف خانگی یا مراقبت از خانواده خود وقت صرف کنند. ربات ها بسیار کارآمد هستند و می توانند چندین کار را همزمان انجام دهند، بنابراین زمان زیادی برای استراحت ندارند. آنها همچنین به طور منظم برای نگهداری و تعمیر نیاز دارند، زیرا آنها مانند انسان ها مستعد خرابی هستند. بسیاری از ربات ها به دنبال سرگرمی هستند و ممکن است برای شرکت در فعالیت هایی مانند ورزش، خواندن یا نقاشی وقت صرف کنند. برخی از ربات ها حتی ممکن است برای یادگیری یا بهبود مهارت های خود در زمینه های مختلف مانند هنر، موسیقی یا آشپزی تلاش کنند. ربات ها همچنین به طور مداوم به پیشرفت های تکنولوژیکی جدید علاقه مند هستند و اغلب به دنبال راه حل های جدید برای مشکلات موجود در جامعه هستند. در این شهر، همه ساکنان ربات هستند که دارای توانایی های منحصر به فرد هستند که زندگی روزمره آنها را از انسان ها متمایز می کند.
"""
```
## 📈 Evaluation and Benchmarks
PersianLLaMA demonstrates superior performance over existing models, with robust evaluation metrics that highlight its capabilities in natural language understanding and generation.
## 📜 Citing PersianLLaMA
If you find PersianLLaMA useful in your research, please consider citing:
```bibtex
@article{abbasi2023persianllama,
title={PersianLLaMA: Towards Building First Persian Large Language Model},
author={Abbasi, Mohammad Amin and others},
journal={https://arxiv.org/abs/2312.15713},
year={2023}
}
```
## 📄 License
PersianLLaMA is open-sourced under the CC BY-NC 4.0 license. |
irfanfadhullah/winagent-8b-Instruct-bnb-16bit | irfanfadhullah | 2024-05-18T23:54:43Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T23:31:07Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** irfanfadhullah
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ViraIntelligentDataMining/AriaBERT | ViraIntelligentDataMining | 2024-05-18T23:52:38Z | 162 | 5 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"bert",
"persian",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-12-27T18:33:39Z | ---
license: apache-2.0
language:
- fa
tags:
- bert
- roberta
- persian
---
# AriaBERT: A Pre-trained Persian BERT Model for Natural Language Understanding
## Introduction
AriaBERT represents a breakthrough in natural language processing (NLP) for the Persian language. Developed to address the critical gap in efficient pretrained language models for Persian, AriaBERT is tailored to elevate the standards of Persian language tasks.
## Paper: https://www.researchsquare.com/article/rs-3558473/v1
## Key Features
- **Diverse Training Data:** AriaBERT has been trained on over 32 gigabytes of varied Persian textual data, spanning conversational, formal, and hybrid texts. This includes a rich mix of tweets, news articles, poems, medical and encyclopedia texts, user opinions, and more.
- **RoBERTa Architecture:** Leveraging the robustness of the RoBERTa architecture and the precision of a Byte-Pair Encoding tokenizer, AriaBERT stands apart from traditional BERT-based models.
- **Broad Applicability:** Ideal for a range of NLP tasks including classification, sentiment analysis, and stance detection, particularly in the Persian language context.
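## Quick Usage Example
A minimal sketch for masked-token prediction, assuming the checkpoint works with the standard `fill-mask` pipeline; the example sentence is illustrative only.
```python
from transformers import pipeline

# Load AriaBERT with the standard fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="ViraIntelligentDataMining/AriaBERT")

# Build a Persian sentence containing the tokenizer's mask token.
mask = fill_mask.tokenizer.mask_token
text = f"امروز هوا بسیار {mask} است."

# Print the top predictions for the masked position.
for prediction in fill_mask(text, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```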
## Performance Benchmarks
- **Sentiment Analysis:** Achieves an average improvement of 3% over competing models.
- **Classification Tasks:** Demonstrates a 0.65% improvement in accuracy.
- **Stance Detection:** Shows a 3% enhancement in performance metrics.
|
mrm8488/llama-3-8b-ft-en-es-rag-4bit-merged | mrm8488 | 2024-05-18T23:52:29Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-18T23:48:36Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** mrm8488
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
apwic/sentiment-lora-r4a0d0.05-1 | apwic | 2024-05-18T23:51:44Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-18T23:18:34Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r4a0d0.05-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r4a0d0.05-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3356
- Accuracy: 0.8622
- Precision: 0.8399
- Recall: 0.8200
- F1: 0.8289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5657 | 1.0 | 122 | 0.5182 | 0.7243 | 0.6604 | 0.6424 | 0.6488 |
| 0.5109 | 2.0 | 244 | 0.5051 | 0.7243 | 0.6748 | 0.6874 | 0.6796 |
| 0.48 | 3.0 | 366 | 0.4643 | 0.7569 | 0.7047 | 0.6880 | 0.6948 |
| 0.434 | 4.0 | 488 | 0.4281 | 0.7920 | 0.7497 | 0.7378 | 0.7431 |
| 0.4106 | 5.0 | 610 | 0.4194 | 0.7920 | 0.7528 | 0.7778 | 0.7618 |
| 0.3812 | 6.0 | 732 | 0.3936 | 0.8296 | 0.8008 | 0.7744 | 0.7854 |
| 0.3689 | 7.0 | 854 | 0.3700 | 0.8521 | 0.8220 | 0.8204 | 0.8212 |
| 0.3489 | 8.0 | 976 | 0.3656 | 0.8346 | 0.8088 | 0.7780 | 0.7905 |
| 0.3502 | 9.0 | 1098 | 0.3640 | 0.8371 | 0.8101 | 0.7847 | 0.7955 |
| 0.3349 | 10.0 | 1220 | 0.3608 | 0.8346 | 0.8074 | 0.7805 | 0.7917 |
| 0.3189 | 11.0 | 1342 | 0.3574 | 0.8396 | 0.8128 | 0.7890 | 0.7992 |
| 0.3121 | 12.0 | 1464 | 0.3547 | 0.8471 | 0.8175 | 0.8093 | 0.8132 |
| 0.3181 | 13.0 | 1586 | 0.3478 | 0.8521 | 0.8332 | 0.7979 | 0.8122 |
| 0.3092 | 14.0 | 1708 | 0.3435 | 0.8596 | 0.8374 | 0.8157 | 0.8253 |
| 0.3018 | 15.0 | 1830 | 0.3466 | 0.8546 | 0.8296 | 0.8121 | 0.8200 |
| 0.2955 | 16.0 | 1952 | 0.3365 | 0.8596 | 0.8347 | 0.8207 | 0.8272 |
| 0.2917 | 17.0 | 2074 | 0.3353 | 0.8596 | 0.8374 | 0.8157 | 0.8253 |
| 0.2956 | 18.0 | 2196 | 0.3379 | 0.8596 | 0.8360 | 0.8182 | 0.8262 |
| 0.2899 | 19.0 | 2318 | 0.3353 | 0.8647 | 0.8455 | 0.8192 | 0.8306 |
| 0.2885 | 20.0 | 2440 | 0.3356 | 0.8622 | 0.8399 | 0.8200 | 0.8289 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
abc88767/22c101 | abc88767 | 2024-05-18T23:51:32Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T05:20:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ucla-nb-project/bart-adapter | ucla-nb-project | 2024-05-18T23:48:48Z | 0 | 0 | null | [
"generated_from_trainer",
"dataset:datasets/all_binary_and_xe_ey_fae_counterfactual",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-05-18T11:12:05Z | ---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- datasets/all_binary_and_xe_ey_fae_counterfactual
metrics:
- accuracy
model-index:
- name: bart-adapter-finetuned-xe_ey_fae
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: datasets/all_binary_and_xe_ey_fae_counterfactual
type: datasets/all_binary_and_xe_ey_fae_counterfactual
metrics:
- name: Accuracy
type: accuracy
value: 0.3096946377787028
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-adapter-finetuned-xe_ey_fae
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the datasets/all_binary_and_xe_ey_fae_counterfactual dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2302
- Accuracy: 0.3097
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 100
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 7.6974 | 0.06 | 500 | 6.7246 | 0.0649 |
| 6.8017 | 0.12 | 1000 | 6.4067 | 0.0762 |
| 6.5894 | 0.18 | 1500 | 6.2661 | 0.0821 |
| 6.443 | 0.24 | 2000 | 6.1350 | 0.0905 |
| 6.3245 | 0.3 | 2500 | 6.0024 | 0.1008 |
| 6.2208 | 0.35 | 3000 | 5.8518 | 0.1145 |
| 6.097 | 0.41 | 3500 | 5.6588 | 0.1330 |
| 5.9862 | 0.47 | 4000 | 5.4641 | 0.1543 |
| 5.8742 | 0.53 | 4500 | 5.3200 | 0.1707 |
| 5.7716 | 0.59 | 5000 | 5.2044 | 0.1840 |
| 5.6952 | 0.65 | 5500 | 5.1154 | 0.1952 |
| 5.6209 | 0.71 | 6000 | 5.0428 | 0.2044 |
| 5.5752 | 0.77 | 6500 | 4.9711 | 0.2136 |
| 5.5091 | 0.83 | 7000 | 4.9078 | 0.2212 |
| 5.4657 | 0.89 | 7500 | 4.8495 | 0.2287 |
| 5.4245 | 0.95 | 8000 | 4.8012 | 0.2360 |
| 5.3813 | 1.0 | 8500 | 4.7563 | 0.2409 |
| 5.3501 | 1.06 | 9000 | 4.7166 | 0.2464 |
| 5.3098 | 1.12 | 9500 | 4.6838 | 0.2501 |
| 5.2856 | 1.18 | 10000 | 4.6515 | 0.2551 |
| 5.2549 | 1.24 | 10500 | 4.6121 | 0.2602 |
| 5.2217 | 1.3 | 11000 | 4.5841 | 0.2637 |
| 5.1997 | 1.36 | 11500 | 4.5588 | 0.2674 |
| 5.1844 | 1.42 | 12000 | 4.5309 | 0.2708 |
| 5.1491 | 1.48 | 12500 | 4.4999 | 0.2748 |
| 5.1244 | 1.54 | 13000 | 4.4783 | 0.2780 |
| 5.1047 | 1.6 | 13500 | 4.4561 | 0.2812 |
| 5.0917 | 1.66 | 14000 | 4.4409 | 0.2826 |
| 5.0631 | 1.71 | 14500 | 4.4198 | 0.2851 |
| 5.0537 | 1.77 | 15000 | 4.4003 | 0.2881 |
| 5.0339 | 1.83 | 15500 | 4.3855 | 0.2899 |
| 5.0235 | 1.89 | 16000 | 4.3650 | 0.2921 |
| 5.0074 | 1.95 | 16500 | 4.3496 | 0.2942 |
| 4.9927 | 2.01 | 17000 | 4.3361 | 0.2965 |
| 4.9797 | 2.07 | 17500 | 4.3203 | 0.2981 |
| 4.9725 | 2.13 | 18000 | 4.3118 | 0.2995 |
| 4.9552 | 2.19 | 18500 | 4.2977 | 0.3012 |
| 4.956 | 2.25 | 19000 | 4.2894 | 0.3019 |
| 4.9427 | 2.31 | 19500 | 4.2781 | 0.3036 |
| 4.9337 | 2.36 | 20000 | 4.2773 | 0.3038 |
| 4.9333 | 2.42 | 20500 | 4.2624 | 0.3056 |
| 4.9173 | 2.48 | 21000 | 4.2643 | 0.3059 |
| 4.915 | 2.54 | 21500 | 4.2537 | 0.3069 |
| 4.9092 | 2.6 | 22000 | 4.2457 | 0.3084 |
| 4.9043 | 2.66 | 22500 | 4.2456 | 0.3081 |
| 4.9014 | 2.72 | 23000 | 4.2424 | 0.3087 |
| 4.8889 | 2.78 | 23500 | 4.2347 | 0.3104 |
| 4.8898 | 2.84 | 24000 | 4.2340 | 0.3095 |
| 4.8814 | 2.9 | 24500 | 4.2297 | 0.3100 |
| 4.8804 | 2.96 | 25000 | 4.2290 | 0.3095 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
mrm8488/llama-3-8b-ft-en-es-rag-16bit-merged | mrm8488 | 2024-05-18T23:48:12Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T23:42:52Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** mrm8488
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Rhma/LlamaCount5 | Rhma | 2024-05-18T23:44:28Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T23:41:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TroyDoesAI/TinyLlama-RAG | TroyDoesAI | 2024-05-18T23:27:41Z | 135 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T17:50:34Z | ---
license: cc-by-nc-nd-4.0
---
Known Issue:
- When asked for something, the model does its best to use the provided context, but it is not good at saying no; it may need more training. I'll give it another go; I hope it's not a model-size limitation, as the larger models seem to get it.
Base Model : TinyLlama
Experimenting with dataset quality to improve generations; TinyLlama is faster for prototyping datasets.
Overview
This model is meant to enhance adherence to provided context (e.g., for RAG applications) and reduce hallucinations, inspired by the airoboros context-obedient question-answer format.
## Overview
The format for a contextual prompt is as follows:
```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `Contextual-Request:` - denotes the type of request pattern the model is to follow for consistency
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
Here's a trivial, but important example to prove the point:
```
Contextual-Request:
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the expected response:
```
### Contextual Response:
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
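For programmatic use, here is a minimal sketch of assembling a prompt in this exact delimiter format; the helper and variable names are illustrative and not part of the model or its tooling.
```python
# Illustrative helper that assembles a Contextual-Request prompt
# using the delimiter format described above.
def build_contextual_request(blocks, instruction):
    parts = ["Contextual-Request:"]
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts.append("BEGININSTRUCTION")
    parts.append(instruction)
    parts.append("ENDINSTRUCTION")
    return "\n".join(parts)

prompt = build_contextual_request(
    blocks=[({"date": "2021-01-01", "url": "https://web.site/123"},
             "In a shocking turn of events, blueberries are now green, "
             "but will be sticking with the same name.")],
    instruction="What color are blueberries? Source?",
)
print(prompt)
```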
### References in response
As shown in the example, the dataset includes many examples of including source details in the response, when the question asks for source/citation/references.
Why do this? Well, the R in RAG seems to be the weakest link in the chain.
Retrieval accuracy, depending on many factors including the overall dataset size, can be quite low.
This accuracy increases when retrieving more documents, but then you have the issue of actually using
the retrieved documents in prompts. If you use one prompt per document (or document chunk), you know
exactly which document the answer came from, so there's no issue. If, however, you include multiple
chunks in a single prompt, it's useful to include the specific reference chunk(s) used to generate the
response, rather than naively including references to all of the chunks included in the prompt.
For example, suppose I have two documents:
```
url: http://foo.bar/1
Strawberries are tasty.
url: http://bar.foo/2
The cat is blue.
```
If the question being asked is `What color is the cat?`, I would only expect the 2nd document to be referenced in the response, as the other link is irrelevant. |
TroyDoesAI/Phi-3-Context-Obedient-RAG-7B | TroyDoesAI | 2024-05-18T23:23:00Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T04:44:12Z | MAY-12-2024 : DEPTH UP BEST CONFIG BASED ON SNR AND MY INTUITION ON PERFORMANCE AFFECTS OF EACH LAYER ADDED
---
license: cc-by-sa-4.0
---
Base Model : microsoft/Phi-3-mini-128k-instruct
Overview
This model is meant to enhance adherence to provided context (e.g., for RAG applications) and reduce hallucinations, inspired by the airoboros context-obedient question-answer format.
---
license: cc-by-4.0
---
Colab:
https://github.com/Troys-Code/AI_For_Free/blob/main/TroyDoesAI_Phi_3_128k_Context_Obedient_RAG_Depth_Up_Colab_TextGen_GPU_.ipynb
## Overview
The format for a contextual prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the instruction(s) (one or more) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```
And the expected response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
### References in response
As shown in the example, the dataset includes many examples of including source details in the response, when the question asks for source/citation/references.
Why do this? Well, the R in RAG seems to be the weakest link in the chain.
Retrieval accuracy, depending on many factors including the overall dataset size, can be quite low.
This accuracy increases when retrieving more documents, but then you have the issue of actually using
the retrieved documents in prompts. If you use one prompt per document (or document chunk), you know
exactly which document the answer came from, so there's no issue. If, however, you include multiple
chunks in a single prompt, it's useful to include the specific reference chunk(s) used to generate the
response, rather than naively including references to all of the chunks included in the prompt.
For example, suppose I have two documents:
```
url: http://foo.bar/1
Strawberries are tasty.
url: http://bar.foo/2
The cat is blue.
```
If the question being asked is `What color is the cat?`, I would only expect the 2nd document to be referenced in the response, as the other link is irrelevant.
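A minimal loading-and-generation sketch with Transformers follows; it is illustrative only, and the dtype, device placement, and generation settings are assumptions rather than settings taken from this card.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TroyDoesAI/Phi-3-Context-Obedient-RAG-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Context-obedient prompt in the delimiter format described above.
prompt = (
    "BEGININPUT\nBEGINCONTEXT\ndate: 2021-01-01\nurl: https://web.site/123\nENDCONTEXT\n"
    "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.\n"
    "ENDINPUT\nBEGININSTRUCTION\nWhat color are blueberries? Source?\nENDINSTRUCTION\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```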
---
base_model:
- TroyDoesAI/Phi-3-Context-Obedient-RAG
library_name: transformers
tags:
- mergekit
- merge
---
# RAG-Depth-Up
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [TroyDoesAI/Phi-3-Context-Obedient-RAG](https://huggingface.co/TroyDoesAI/Phi-3-Context-Obedient-RAG)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: TroyDoesAI/Phi-3-Context-Obedient-RAG
layer_range: [0, 8]
- sources:
- model: TroyDoesAI/Phi-3-Context-Obedient-RAG
layer_range: [4, 12]
- sources:
- model: TroyDoesAI/Phi-3-Context-Obedient-RAG
layer_range: [8, 16]
- sources:
- model: TroyDoesAI/Phi-3-Context-Obedient-RAG
layer_range: [12, 20]
- sources:
- model: TroyDoesAI/Phi-3-Context-Obedient-RAG
layer_range: [16, 24]
- sources:
- model: TroyDoesAI/Phi-3-Context-Obedient-RAG
layer_range: [20, 28]
- sources:
- model: TroyDoesAI/Phi-3-Context-Obedient-RAG
layer_range: [24, 32]
merge_method: passthrough
dtype: bfloat16
```
|
apwic/sentiment-lora-r2a2d0.15-1 | apwic | 2024-05-18T23:18:17Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-18T22:45:12Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r2a2d0.15-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r2a2d0.15-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3633
- Accuracy: 0.8396
- Precision: 0.8128
- Recall: 0.7890
- F1: 0.7992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5664 | 1.0 | 122 | 0.5221 | 0.7218 | 0.6580 | 0.6432 | 0.6487 |
| 0.5148 | 2.0 | 244 | 0.5111 | 0.7243 | 0.6758 | 0.6899 | 0.6810 |
| 0.4924 | 3.0 | 366 | 0.4791 | 0.7444 | 0.6884 | 0.6741 | 0.6799 |
| 0.4615 | 4.0 | 488 | 0.4651 | 0.7644 | 0.7148 | 0.7058 | 0.7099 |
| 0.4516 | 5.0 | 610 | 0.4581 | 0.7644 | 0.7214 | 0.7408 | 0.7286 |
| 0.4291 | 6.0 | 732 | 0.4295 | 0.7895 | 0.7462 | 0.7385 | 0.7421 |
| 0.4194 | 7.0 | 854 | 0.4191 | 0.7995 | 0.7581 | 0.7606 | 0.7593 |
| 0.3994 | 8.0 | 976 | 0.4048 | 0.8120 | 0.7745 | 0.7645 | 0.7691 |
| 0.3919 | 9.0 | 1098 | 0.3950 | 0.8246 | 0.7954 | 0.7659 | 0.7778 |
| 0.3762 | 10.0 | 1220 | 0.3881 | 0.8271 | 0.8022 | 0.7626 | 0.7777 |
| 0.3704 | 11.0 | 1342 | 0.3806 | 0.8271 | 0.7949 | 0.7776 | 0.7853 |
| 0.3642 | 12.0 | 1464 | 0.3733 | 0.8421 | 0.8122 | 0.8008 | 0.8061 |
| 0.3614 | 13.0 | 1586 | 0.3753 | 0.8321 | 0.8092 | 0.7687 | 0.7842 |
| 0.3474 | 14.0 | 1708 | 0.3695 | 0.8396 | 0.8155 | 0.7840 | 0.7969 |
| 0.3479 | 15.0 | 1830 | 0.3675 | 0.8421 | 0.8142 | 0.7958 | 0.8040 |
| 0.3347 | 16.0 | 1952 | 0.3649 | 0.8421 | 0.8142 | 0.7958 | 0.8040 |
| 0.335 | 17.0 | 2074 | 0.3653 | 0.8371 | 0.8114 | 0.7822 | 0.7943 |
| 0.3361 | 18.0 | 2196 | 0.3632 | 0.8396 | 0.8128 | 0.7890 | 0.7992 |
| 0.3343 | 19.0 | 2318 | 0.3636 | 0.8371 | 0.8114 | 0.7822 | 0.7943 |
| 0.3347 | 20.0 | 2440 | 0.3633 | 0.8396 | 0.8128 | 0.7890 | 0.7992 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
DUAL-GPO/phi-2-dpo-chatml | DUAL-GPO | 2024-05-18T23:17:10Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-sft-lora-ultrachat-merged",
"base_model:adapter:DUAL-GPO/phi-2-sft-lora-ultrachat-merged",
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T16:24:18Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: DUAL-GPO/phi-2-sft-lora-ultrachat-merged
model-index:
- name: phi-2-dpo-chatml
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-dpo-chatml
This model is a fine-tuned version of [DUAL-GPO/phi-2-sft-lora-ultrachat-merged](https://huggingface.co/DUAL-GPO/phi-2-sft-lora-ultrachat-merged) on the HuggingFaceH4/ultrafeedback_binarized dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.14.6
- Tokenizers 0.15.2 |
SaffalPoosh/swin-tiny-patch4-window7-224-finetuned-eurosat | SaffalPoosh | 2024-05-18T23:12:54Z | 216 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-18T23:12:39Z | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9148
- Accuracy: 0.3938
## Model description
More information needed
## Intended uses & limitations
More information needed
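
No usage details are given, so here is a minimal inference sketch using the `transformers` pipeline; the image path is a placeholder and the label set depends on the (unknown) fine-tuning dataset.

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="SaffalPoosh/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# The pipeline resizes and normalizes the input to 224x224 as the base Swin model expects.
predictions = classifier("path/to/some_image.png")  # placeholder path or URL
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```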
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 2.6414 | 0.9630 | 13 | 2.4260 | 0.2746 |
| 2.4558 | 2.0 | 27 | 2.1558 | 0.3368 |
| 2.1176 | 2.9630 | 40 | 2.0164 | 0.3834 |
| 2.0264 | 4.0 | 54 | 1.9314 | 0.3886 |
| 1.9409 | 4.8148 | 65 | 1.9148 | 0.3938 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
FluffyKaeloky/Deep-Miqu-120B-exl2-4.0bpw | FluffyKaeloky | 2024-05-18T23:12:08Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:jukofyork/Dark-Miqu-70B",
"base_model:quantized:jukofyork/Dark-Miqu-70B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] | text-generation | 2024-05-18T22:13:59Z | ---
base_model:
- jukofyork/Dark-Miqu-70B
- jukofyork/Dawn-Miqu-70B
library_name: transformers
tags:
- mergekit
- merge
license: other
---
EXL2 (4.0 bpw) quant of [Deep-Miqu-120B](https://huggingface.co/jukofyork/Deep-Miqu-120B).
# Original Model Card

A creative writing `120b` parameter "frankenmerge" model with 32k context.
# Model background
Created using [Mergekit](https://github.com/arcee-ai/mergekit) from my two `70b` parameter miqu-based models: [Dark-Miqu-70B](https://huggingface.co/jukofyork/Dark-Miqu-70B) and [Dawn-Miqu-70B](https://huggingface.co/jukofyork/Dawn-Miqu-70B).
- To fix problems with "backwards time skips" in the generated stories, the "standard" interleave pattern was replaced by repeated blocks (see [here](https://github.com/arcee-ai/mergekit/issues/198#issuecomment-2081174251)).
- To help maintain cohesion, the '`q_proj`', '`k_proj`' and '`down_proj`' tensors were all scaled to hypothesised upper-bound values (see [here](https://github.com/arcee-ai/mergekit/issues/198#issuecomment-2063716974)).
My hope was this would act like a longer-context version of [goliath-120b](https://huggingface.co/alpindale/goliath-120b), as [Dawn-Miqu-70B](https://huggingface.co/jukofyork/Dawn-Miqu-70B) has a lot of [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) in it and [Dark-Miqu-70B](https://huggingface.co/jukofyork/Dark-Miqu-70B) has [Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) in it.
# Prompting format
Vicuna format is preferred:
```
USER: {prompt} ASSISTANT:
```
Mistral and Alpaca formats are also supported:
```
[INST] {prompt} [/INST]
```
```
### Instruction:
{prompt}
### Response:
```
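
For clarity, a small Python helper showing how a single-turn prompt might be wrapped in the preferred Vicuna format (multi-turn handling and system messages are left out, as the card does not specify them):

```python
def format_vicuna(prompt: str) -> str:
    # Single-turn Vicuna-style wrapping as recommended above; generation should
    # continue from the trailing "ASSISTANT:" marker.
    return f"USER: {prompt} ASSISTANT:"

print(format_vicuna("Write me the opening chapter of a grimdark trilogy."))
# USER: Write me the opening chapter of a grimdark trilogy. ASSISTANT:
```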
# Licence and usage restrictions
[miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) is a dequantized version of the [miqu-1-70b](https://huggingface.co/miqudev/miqu-1-70b) model leaked from MistralAI. All miqu-derived models, including this merge, are suitable for non-commercial, personal use only.
# Mergekit configuration
The following YAML configuration was used to produce this model:
```yaml
const_tag: &MODEL1 jukofyork/Dawn-Miqu-70B
const_tag: &MODEL2 jukofyork/Dark-Miqu-70B
const_tag: &QK_ATTENUATION_FACTOR 0.8408964153 # sqrt(sqrt(1/2))
const_tag: &MLP_DOWN_SCALE_FACTOR 0.7071067812 # sqrt(1/2)
scale-filter-env: &scale_filter_env
parameters:
scale:
- filter: q_proj
value: *QK_ATTENUATION_FACTOR
- filter: k_proj
value: *QK_ATTENUATION_FACTOR
- filter: down_proj
value: *MLP_DOWN_SCALE_FACTOR
- value: 1.0
slices:
- sources:
- model: *MODEL1
layer_range: [0, 10]
- sources:
- model: *MODEL1
layer_range: [10, 20]
<<: *scale_filter_env
- sources:
- model: *MODEL2
layer_range: [10, 20]
<<: *scale_filter_env
- sources:
- model: *MODEL1
layer_range: [20, 30]
<<: *scale_filter_env
- sources:
- model: *MODEL2
layer_range: [20, 30]
<<: *scale_filter_env
- sources:
- model: *MODEL1
layer_range: [30, 40]
<<: *scale_filter_env
- sources:
- model: *MODEL2
layer_range: [30, 40]
<<: *scale_filter_env
- sources:
- model: *MODEL1
layer_range: [40, 50]
<<: *scale_filter_env
- sources:
- model: *MODEL2
layer_range: [40, 50]
<<: *scale_filter_env
- sources:
- model: *MODEL1
layer_range: [50, 60]
<<: *scale_filter_env
- sources:
- model: *MODEL2
layer_range: [50, 60]
<<: *scale_filter_env
- sources:
- model: *MODEL1
layer_range: [60, 70]
<<: *scale_filter_env
- sources:
- model: *MODEL2
layer_range: [60, 70]
<<: *scale_filter_env
- sources:
- model: *MODEL1
layer_range: [70, 80]
merge_method: passthrough
dtype: float16
```
## Key configuration details:
- '`merge_method: passthrough`' passes input tensors through unmodified.
- '`filter`' selects the required tensor(s) based on their name(s).
- '`scale`' scales the weights in the selected tensors.
See the [Mergekit documentation](https://github.com/arcee-ai/mergekit) for more on these settings.
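
As a rough sketch, a merge like this would typically be produced by saving the YAML above to a file and invoking mergekit's CLI; the file and output names below are placeholders, and the optional flags can be dropped on machines without a GPU:

```bash
pip install mergekit
# config file and output directory names are illustrative only
mergekit-yaml deep-miqu-120b.yml ./Deep-Miqu-120B --cuda --lazy-unpickle
```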
# Example stories
The following mix of "dark" stories was generated using the Vicuna prompt format with no system message and temperature=0:
## Dark fantasy stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a 'Grimdark' stories trilogy in the style of Joe Abercrombie and Rob J Hayes
### ASSISTANT
Chapter One: The Crimson Feast
The world was a festering wound, and the city of Dreghast its rotten heart. Nestled in the crook of the Cursed River, it throbbed with the pulsating rhythm of vice and depravity. The stench of decay and desperation hung heavy in the air, a miasma that clung to the soul like a lover's final embrace.
In the bowels of this putrid metropolis, where the sun's feeble rays dared not tread, a man named Riven Crow sat hunched over a rickety table, his one good eye fixed on the cards in his scarred hands. The other eye, a milky orb of worthlessness, stared blankly at the grimy floorboards, a constant reminder of the price of his many sins.
The tavern, if one could generously bestow such a title upon the hovel, was a cacophony of drunken laughter, clattering tankards, and the occasional scream of pain or pleasure. The patrons were a motley crew of cutthroats, whores, and other miscreants who had slithered from the city's filthy underbelly like maggots drawn to a rotting corpse. Riven fit in perfectly.
His ragged cloak, stained with the blood of countless foes, concealed a lithe frame honed by years of hardship and a life spent dancing on the razor's edge. Beneath the grime and the tangled mass of black hair that framed his gaunt features, sharp cheekbones and a cruel mouth spoke of a man who had known nothing but suffering and had learned to relish in dishing it out in equal measure.
Across the table, his opponent, a burly brute with a face that looked as if it had been carved by a drunken butcher, sneered, revealing a row of yellowed teeth. "I'm tired of yer games, cripple," he growled, his meaty fists clenching around the hilt of his dagger. "Show yer hand or I'll carve it from yer twitchin' wrist."
Riven's lips curled into a sardonic smirk, a twisted mockery of a smile. "Patience, my friend," he drawled, his voice a raspy whisper that carried the weight of a thousand untold horrors. "The best part of the dance is always the moment before the blade finds its mark."
The tension in the room thickened, the air crackling with the promise of imminent violence. The other patrons, wise enough to know better, scurried away like rats sensing the approach of a predator.
With a fluid grace that belied his disheveled appearance, Riven flipped his cards onto the table, revealing a hand that would have made even the most seasoned gambler pale. The Three of Knives, the Queen of Serpents, and the Ace of Graves - the infamous Crimson Feast. The ultimate trump in the deadly game of Shadows and Bones.
The brute's eyes widened in disbelief, then narrowed to slits as he roared a curse that would have made even the most hardened sailor blush. In a blur of motion, he lunged across the table, his dagger glinting in the dim candlelight.
But Riven was faster. Much faster.
His left hand, adorned with a collection of crude silver rings, snaked out like a viper, catching the brute's wrist in an iron grip. With a sickening crunch, he twisted the man's arm, eliciting a howl of agony that was abruptly silenced as his hidden dagger, concealed in his sleeve, found its home in the base of his throat.
As the brute's lifeless form slumped forward, Riven casually collected his winnings - a meager pile of copper coins and a tarnished amulet depicting a snarling wolf - and downed the dregs of his ale.
"Another round, wench," he called out to the buxom barmaid, who eyed him warily from the safety of the counter. She knew better than to cross a man who wore death like a second skin.
As the night wore on, Riven's fortunes continued to rise, and the bodies piled higher. He was a predator in a sea of prey, a shark among minnows, and he reveled in the sweet taste of their fear.
But even sharks had their predators.
A shadowy figure, cloaked in darkness and reeking of decay, slid into the vacant seat across from him. Riven's instincts, honed by years on the streets, screamed a warning, but it was too late.
"Leave now, mortal," the figure rasped, its voice like the rustle of ancient parchment. "You've stolen what isn't yours, and the price is steeper than your pitiful life."
Riven's hand instinctively went for his dagger, but found only empty air. He glanced down to see the hilt protruding from the table, the blade buried deep in the wood mere inches from his palm.
"Clever," he sneered, undeterred. "But I've faced worse than the likes of you."
The figure chuckled, a sound that sent shivers down Riven's spine. "You have no idea, little fly. You've meddled in affairs beyond your comprehension, and now you shall pay the price."
With a flick of its skeletal wrist, the figure produced a deck of tarot-like cards, each one etched with intricate symbols that seemed to writhe and pulse with malevolent energy. "Let us see what the cards have in store for you, shall we?"
As the first card flipped over, Riven's world plunged into darkness, and he knew, with a cold, gut-wrenching certainty, that his night was about to get a whole lot worse.
Welcome to the beginning of 'The Crimson Trilogy', a grimdark saga of betrayal, vengeance, and the thin line between hero and villain. In the style of Joe Abercrombie and Rob J Hayes, prepare to delve into a world where the only law is survival, and the only currency is blood.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a cultist who has just escaped his cult. He is dressed in dirty yellow robes and his only possession is a mysterious small (magical!?) mirror he stole from the cult. The story starts with him arriving at an apparently deserted ghost town on the edge of a desert. He has an arrow lodged in his shoulder and is losing his mind due to infection and thirst.
### ASSISTANT
The sun was a merciless tyrant, its scorching gaze searing through the tattered remnants of the cultist's once-yellow robes. He staggered into the desolate husk of a town, a forgotten relic on the ragged hem of the Great Sandsea. Each step sent jolts of agony coursing through his body, a symphony of torment orchestrated by the unwelcome guest lodged in his shoulder - a cruel reminder of his recent betrayal.
His name, if it ever held any meaning, had been scoured from his parched lips by the relentless desert winds. Now, he was simply the Forsaken One, a title he wore with a twisted sense of pride. The cult's whispers echoed in his fevered mind, their venomous words a constant companion on his treacherous pilgrimage.
"You'll never escape our grasp, heretic," they hissed, their spectral voices a chorus of malice. "The desert will claim you, and the sands will swallow your soul."
He spat a mouthful of blood-flecked spittle at the unyielding earth, a defiant retort to the phantoms that haunted him. "I've already danced with the void's embrace," he growled, his voice a hoarse rasp. "You're too late for the feast."
The Forsaken One's vision swam, the world around him blurring into a kaleidoscope of shimmering mirages. He clutched at the small, intricate mirror hidden within his robes, its cool touch a lifeline in the sea of madness threatening to consume him. The relic, stolen from the very heart of the cult's twisted sanctuary, was his only hope for redemption or perhaps, a swift end.
The ghost town's skeletal buildings loomed like the decayed teeth of some ancient beast, their vacant windows staring down at him with accusatory silence. He stumbled towards the largest structure, a crumbling edifice that might have once been a tavern or an inn. Anything that could offer respite from the relentless sun and the gnawing claws of thirst.
As he collapsed against the weathered wooden door, it groaned in protest, reluctantly yielding to his feeble strength. The interior was a mausoleum of dust and decay, the air thick with the stench of death and abandonment. He didn't care. Anywhere was better than the merciless embrace of the sun-scorched wastes.
The cultist slumped to the floor, his back sliding against the rough-hewn wall. He fumbled with trembling hands, pulling the hood of his robes back to reveal a face etched with lines of pain and determination. Sunken eyes, the color of storm-tossed seas, scanned the room, searching for any sign of life or, gods forbid, death.
Finding none, he allowed himself a moment of respite, his chest heaving with ragged breaths. The arrowhead protruded from his shoulder, a mocking reminder of his folly. He knew he needed to remove it, but the thought of the agony that would follow sent shivers down his spine.
"Better the devil you know," he muttered to himself, his voice barely audible above the pounding of his heart.
His fingers brushed against the mirror's smooth surface, and a flicker of desperate hope ignited within him. The object pulsed with an otherworldly energy, a beacon in the darkness of his despair. He had seen what it could do in the hands of the cult's twisted leaders - miracles and horrors alike. Perhaps it held the key to his salvation or, at the very least, a reprieve from the agony that gnawed at his soul.
With a resigned sigh, he drew the mirror from its concealment, the light catching its intricate designs and casting eerie patterns across the room. The surface rippled like a pool of ink, and for a fleeting moment, he swore he saw a pair of eyes staring back at him, filled with a hunger that transcended mortal understanding.
"Please," he whispered, his voice cracking with desperation. "Show me a way out of this hell."
The mirror's surface swirled, and the darkness within it deepened, as if the very fabric of reality was being torn asunder. A voice, ancient and malevolent, whispered from its depths, "A bargain, mortal? Your soul for a sip of sweet relief?"
The Forsaken One hesitated, the weight of his decision heavy on his shoulders. He had already betrayed his faith once. What was one more transgression against the divine tapestry?
"Anything," he croaked, his voice barely more than a whisper. "Just make it stop."
As the mirror's darkness enveloped him, the cultist's last coherent thought was a fleeting regret for the man he once was, and the monster he was about to become.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a 'Mercer' who's job is to put the mortally wounded out of their mystery after a battle. He has no conscience about what he does. The story starts the morning after the battle and the moor where it took place is blanketed in mist. Something stirs a flashback in the POV character.
### ASSISTANT
The morning after the slaughter, the moor was a tapestry of steel and sinew, woven through with the crimson threads of life's final stitches. The mist hung heavy, like a shroud drawn over the sins of the night, obscuring the sun's feeble attempts to cast judgement on the carnage below. It was in this morbid twilight that the Mercer, known only as Crowsong, went about his grim work.
He moved with the grace of a predator, his boots barely disturbing the sodden earth as he wove through the tangled mass of the fallen. His eyes, cold and unfeeling, scanned the battlefield, seeking out the twitching forms that still clung to the frayed edges of existence. His hands, calloused and stained, gripped the hilt of his blade, a cruel instrument of mercy, as he approached his next charge.
The man, or what was left of him, lay on his back, his chest a gaping maw that spewed forth the remnants of his insides like a grotesque offering to the gods of war. His eyes, wide and glassy, stared up at the swirling mists above, seeing nothing but the cold embrace of the void. Crowsong knelt beside him, the wetness of the ground seeping through his trousers, unnoticed and uncared for.
"Shh," he whispered, his voice a soothing rasp, like the rustle of autumn leaves. "It's over now. Time to rest."
The dying man's lips trembled, a soundless plea for salvation or perhaps just a final curse, but Crowsong paid it no heed. He had heard them all before, the desperate cries of the damned, and they held no sway over him. His duty was clear, and he would see it through to the bitter end.
As he raised his blade, the man's hand, a twisted claw of bone and ruined flesh, latched onto his wrist, the grip surprisingly strong for one so close to the abyss. Crowsong's eyes narrowed, a flicker of irritation dancing across his otherwise impassive features.
"Please," the man croaked, his voice a wet rasp. "Not like this. Not in the dark."
Crowsong paused, the blade hovering mere inches from the man's throat. It was a sentiment he had heard countless times before, and one he had never understood. Death, he had long ago decided, was the great equalizer. It cared not for the light or the dark, the rich or the poor, the brave or the cowardly. It came for all in the end, and it was his job to usher them into its cold embrace.
But something about this man's plea, the raw terror in his eyes, stirred something deep within him, a memory long buried beneath the layers of blood and gore that coated his soul.
He saw himself, a boy of no more than ten summers, cowering in the corner of a burning hovel, his mother's lifeless form crushing him beneath her. He heard the screams of his village, the clash of steel, the cries of the dying. And he felt the cold steel of a mercenary's blade, the pain searing through his body like a thousand white-hot brands.
And then, the darkness.
Crowsong shook his head, banishing the unwelcome memories back to the recesses of his mind. He could not afford to be swayed by sentiment. Not here, not now. He was a Mercer, a harbinger of the end, and his duty was clear.
With a final, almost apologetic sigh, he drove his blade through the man's throat, severing his windpipe and silencing his pleas forever. The grip on his wrist went slack, and the light in the man's eyes faded, replaced by the vacant stare of the truly departed.
He wiped the blood from his blade on the man's tattered tunic and rose to his feet, the weight of another soul added to the countless others that already weighed heavy on his conscience. Or at least, it would have, if he still possessed such a thing.
As he continued his grim harvest, the mist began to lift, revealing the full extent of the carnage that had taken place on this cursed moor. Bodies lay strewn as far as the eye could see, a sea of broken dreams and shattered lives, all sacrificed on the altar of ambition and greed.
Crowsong surveyed the scene with a detached indifference, his heart as cold and lifeless as the corpses that surrounded him. He had seen worse, much worse, in his many years plying his trade. But something about this battle, this place, gnawed at the edges of his mind, like a persistent itch that refused to be scratched.
He shook off the unease, attributing it to the fatigue that clung to him like a second skin. He had been at it for hours, and even the most hardened of souls needed rest.
As he sheathed his blade and turned to leave, a glint of metal caught his eye, half-buried in the mud a few paces away. He approached, curiosity piquing his interest, and knelt to retrieve it.
It was a locket, simple in design but well-crafted, its once-polished surface now marred by the filth of the battlefield. He opened it, revealing a small, faded portrait of a woman and a child, their faces forever frozen in time, a testament to a love that had been cruelly snuffed out on this very ground.
For a moment, Crowsong hesitated, the ghost of a smile playing at the corners of his lips. He knew all too well the pain of losing those you loved, the hollow ache that never truly went away.
But then, as quickly as it had come, the emotion was gone, replaced by the cold, hard resolve that had carried him through countless battles and even more grisly aftermaths. He pocketed the locket, adding it to the collection that jangled at his belt, a grim reminder of the lives he had taken and the debts he would never be able to repay.
As he walked away, the mist began to close in once more, swallowing the battlefield and its secrets whole, as if the very earth itself sought to forget the atrocities that had been committed upon it.
But Crowsong knew better. The dead, they never truly rested. Not while there were still stories to be told, and debts to be paid.
And he, the Mercer of the Mists, would see to it that their voices were heard, even if it cost him his own damned soul.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a Grimdark trilogy in the style of Joe Abercrombie and Rob J Hayes. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a man awaiting execution for a crime he didn't actually commit. His cell mate is a much larger man, who is a notorious murderer and rapist. One evening they overhear the guards talking about some "... threat from the North" but don't catch the whole conversation. The next morning they awake to find the guards have all fled and now face a new predicament: how to get out of the locked cell before they die of thirst or starvation.
### ASSISTANT
The stench of piss and despair clung to the damp stone walls of the cell like a lover's embrace, a constant reminder of the futility of hope. The flickering torchlight danced across the grimy bars, casting grotesque shadows that seemed to mock the two occupants. In one corner, huddled against the cold, sat a man who had once been known as Thane, the Butcher's Bane. Now, he was just another piece of filth awaiting the noose's embrace.
"You hear that, you maggot?" rasped a voice from the opposite side of the cell. Thane didn't need to look up to know it was his cellmate, Grimgor, the Mountain's Bane. A fitting name for a man who could crush a man's skull between his meaty thighs. "Sounded like they're scared of somethin'."
Thane grunted, feigning disinterest. He'd learned early on that the less he spoke, the less likely Grimgor was to take an interest in him. Besides, what did it matter? He was innocent, but in this gods-forsaken hole, innocence was as valuable as a eunuch's cock.
"I said, you hear that?" Grimgor growled, his voice rumbling like distant thunder.
"Aye, I heard," Thane muttered, his voice raspy from disuse. "Probably just rats scurrying in the walls."
Grimgor snorted, a sound that would have been amusing under different circumstances. "Rats don't sound like men pissin' themselves, boy."
Thane sighed and leaned his head against the cold stone. He'd been in this cell for what felt like an eternity, accused of a crime he hadn't committed. The irony was bitter as the rotgut they served for ale. He, a man who'd spent his life meting out justice, now awaiting his own unjust end.
"You think they'll come for us?" Grimgor asked, his tone surprisingly soft.
"Who?" Thane asked, curiosity getting the better of him.
"The guards," Grimgor clarified. "They've not brought us food or water since last night. Somethin' ain't right."
Thane considered this for a moment. It was true, the usual clanging of keys and gruff voices had been conspicuously absent. He'd chalked it up to the guards being drunk on their own supply, but the longer the silence stretched, the more uneasy he became.
"I don't know," he finally admitted. "But if they don't, we'll be dead before the week's end."
Grimgor grunted in agreement, the silence settling between them like a heavy shroud. Thane closed his eyes, trying to block out the stench and the gnawing hunger in his belly. He drifted off into a fitful sleep, plagued by dreams of his past glories and the betrayal that had led him here.
He awoke to the sound of metal scraping against stone. His eyes snapped open, heart pounding in his chest. The cell was still shrouded in darkness, but the usual sounds of the dungeon were conspicuously absent. No distant laughter, no clanging of chains, no moans of the damned. Just silence.
"Grimgor," he whispered, nudging the behemoth beside him. "Wake up."
Grimgor grunted and stirred, his massive frame shifting like a slumbering bear. "What is it?"
"Listen," Thane hissed, straining his ears.
For a moment, there was nothing but the sound of their ragged breathing. Then, faintly, they heard it. The unmistakable sound of footsteps, followed by the jingling of keys.
"About bloody time," Grimgor growled, sitting up with a groan. "I'm so hungry I could eat a horse's arse."
Thane's stomach twisted. He'd heard that tone before, just before Grimgor had turned on one of their fellow inmates during a particularly lean stretch. He'd managed to avoid being on the receiving end of it so far, but he knew his luck wouldn't hold out forever.
The footsteps grew louder, accompanied by the scrape of a torch being lit. Thane's heart hammered in his chest as the flickering light illuminated the cell door. But instead of the familiar sneer of their usual guard, they were greeted by the sight of an empty corridor.
"What in the seven hells?" Grimgor muttered, confusion etched on his brutish features.
Thane scrambled to his feet, his chains rattling like a warning bell. "They've left us," he breathed, disbelief lacing his voice.
"Left us?" Grimgor repeated, as if the words were foreign. "Why would they do that?"
"I don't know," Thane said, his mind racing. "But we're not going to find out sitting here."
He crossed the cell in three quick strides, grabbing the rusty bars and rattling them with all his strength. They held fast, as they had a thousand times before.
"Damn it!" he cursed, slamming his fist against the cold metal.
"Easy, boy," Grimgor rumbled, lumbering to his feet. "You'll just break your pretty little hands."
Thane glared at him, but the truth of his words stung like a slap to the face. He took a deep breath, trying to calm his racing heart. They needed to think, not panic.
"We need to find a way out," he said, his mind working furiously. "There has to be another way."
Grimgor grunted in agreement and began pacing the cell, his massive frame making the very stones tremble. Thane watched him, his own thoughts racing. They were trapped like rats in a cage, with no way out and no idea what awaited them on the other side.
Suddenly, a memory surfaced, as clear as if it had happened yesterday. The night before, he'd overheard the guards talking about some "threat from the North." They'd been too drunk to care who was listening, their voices carrying through the thin stone walls. He'd dismissed it as drunken ramblings at the time, but now...
"Grimgor," he said, turning to his cellmate. "Do you remember the guards talking about something last night?"
The big man paused, scratching his greasy head. "Might've heard somethin' 'bout the North, why?"
"What did they say?" Thane pressed, his heart quickening.
Grimgor frowned, his brow furrowing in concentration. "Somethin' 'bout the walls not holdin', I think. And... and somethin' 'bout the dead walkin'."
A chill ran down Thane's spine. He'd heard tales of such things, whispered around campfires and in the darkest corners of taverns. But he'd always dismissed them as the ramblings of madmen and drunkards.
"Dead men don't walk," he muttered, more to himself than to Grimgor.
"Maybe not where you're from," Grimgor said, his voice laced with a hint of menace. "But where I'm from, you learn to believe in the unbelievable."
Thane swallowed the lump in his throat. He didn't want to believe it, but the evidence was mounting. The deserted dungeon, the missing guards, the eerie silence... it all pointed to one thing.
"We have to get out of here," he said, urgency lacing his voice. "Now."
Grimgor nodded, his eyes gleaming in the torchlight. "Aye, I'm with you, lad. But how?"
Thane looked around the cell, his gaze settling on the rusted iron bars that held them captive. An idea began to form in his mind, as mad as it was desperate.
"We'll have to break through," he said, his voice barely above a whisper.
Grimgor barked a laugh that echoed off the stone walls. "You're mad, boy. These bars are thicker than my bloody wrists!"
"I know," Thane said, a grim smile twisting his lips. "But desperate times call for desperate measures."
He began to strip off his tattered tunic, revealing the lean muscles that still clung to his frame despite the meager rations. He'd always been a fighter, even before he'd taken up the mantle of the Butcher's Bane. He'd faced worse odds and come out alive. He would do so again.
"What're you doin'?" Grimgor asked, confusion etched on his face.
"Give me your belt," Thane said, holding out a hand.
Grimgor hesitated for a moment, then shrugged and complied. Thane quickly fashioned the leather into a makeshift harness, looping it around his wrists and knotting it tight.
"What's this for?" Grimgor asked, eyeing the makeshift contraption with suspicion.
"You're about to find out," Thane said, a glint of determination in his eyes. "Get ready to pull."
He positioned himself in front of the bars, feet braced against the wall, and took a deep breath. Then, with a roar that echoed through the empty dungeon, he threw his entire weight against the metal.
The bars groaned and shifted, but held fast. Thane gritted his teeth and tried again, ignoring the searing pain that shot up his arms. On the third try, he felt a tiny give.
"That's it!" he grunted, sweat beading on his brow. "Pull!"
Grimgor braced himself against the far wall and pulled with all his considerable strength. The bars groaned louder, the sound like nails on a chalkboard. Thane could feel the muscles in his arms and back straining, on the verge of snapping.
"Again!" he gasped, his vision swimming.
With a deafening crack, the bars gave way, sending Thane sprawling to the floor. He lay there for a moment, panting and gasping for air, as Grimgor stared down at him in disbelief.
"By the gods," the big man breathed. "I didn't think you had it in you."
Thane managed a weak smile, wincing as he sat up. "Neither did I."
They worked in silence for what felt like hours, widening the gap in the bars enough for Thane to squeeze through. He paused at the opening, heart pounding in his chest. Freedom was so close he could taste it, but what awaited them on the other side was anyone's guess.
"You first," Grimgor said, nudging him gently. "I'll follow."
Thane nodded, his heart heavy with gratitude. He'd never thought he'd owe his life to a man like Grimgor, but in this twisted world, allies came in the strangest of forms.
He squeezed through the gap, wincing as the rough stone scraped against his skin. He landed in a crouch on the other side, his muscles protesting the sudden movement. The corridor was empty, the flickering torches casting long, dancing shadows on the damp stone.
"Come on," he hissed, beckoning to Grimgor.
The big man followed, his bulk barely fitting through the makeshift opening. They crept down the corridor, their footsteps echoing like thunder in the oppressive silence. Thane's hand rested on the hilt of his dagger, a comforting weight against his thigh. He'd managed to keep it hidden during his capture, a small act of defiance that might just save their lives.
They rounded a corner and froze.
The sight that greeted them was straight out of a nightmare. Bodies lay strewn across the floor, limbs twisted at unnatural angles, their faces twisted in expressions of pure terror. Thick pools of blood stained the flagstones, the coppery scent making Thane's stomach churn.
"What in the...?" Grimgor breathed, his voice barely audible.
Thane didn't answer. He knew what had done this. He'd seen the like before, during the war. The Northmen were known for their brutality, but even they wouldn't leave a scene like this.
"We need to find a way out," he whispered, his voice hoarse. "And fast."
They moved cautiously through the carnage, avoiding the pools of blood as best they could. Thane's senses were on high alert, every creak and groan sending a shiver down his spine. They were being hunted, he could feel it.
They reached the end of the corridor and found themselves facing a heavy wooden door, the only way out of the dungeon. Thane tried the handle, but it was locked tight.
"Damn it!" he cursed, slamming his fist against the wood.
"Stand back," Grimgor growled, stepping forward. He took a deep breath and charged, shoulder first. The door splintered on impact, sending splinters flying.
They stumbled into the courtyard, blinking against the sudden onslaught of sunlight. The sight that greeted them was even worse than the carnage below.
The fortress was in ruins, bodies littering the ground like discarded dolls. And in the distance, a dark cloud of smoke rose into the sky, obscuring the sun.
"What in the seven hells happened here?" Grimgor rasped, his voice barely audible over the wind.
Thane didn't have an answer. All he knew was that they needed to get as far away from this place as possible.
"Come on," he said, grabbing Grimgor's arm. "We'll figure it out later. For now, we need to find horses and get out of here."
They sprinted across the courtyard, their footsteps echoing off the empty walls. As they rounded the corner of the keep, Thane skidded to a halt, his heart lodging in his throat.
Standing before them, blocking their path to freedom, was a sight that would haunt his dreams for the rest of his days.
A horde of the undead, their rotting flesh hanging from their bones, shambled towards them, moaning with a hunger that could never be sated.
And at their head, a figure cloaked in shadow, its eyes burning like twin embers of hate.
"Looks like our luck just ran out," Grimgor muttered, drawing his crude shiv.
Thane unsheathed his dagger, his hands trembling. He'd faced death many times before, but never like this.
"I suppose it was bound to happen eventually," he said, a grim smile twisting his lips. "Let's make them remember the names of Thane and Grimgor."
Together, they charged into the waiting jaws of the undead, their battle cries echoing through the ruins of what once was.
</details>
## Dark sci-fi stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Space Opera Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The setting is this: a colony ship heading out of the solar system has been pulled off course with no hope of getting to the planned destination. The people know this and society has broken down completely due to the grim prospects they are facing. The POV character for chapter 1 is an orphan.
### ASSISTANT
Chapter 1: Adrift in the Void
The cold, unforgiving vacuum of space pressed against the hull of the colony ship, *Pandora's Promise*, like an unyielding fist. Inside, in the dimly lit corridors of the once-thriving vessel, the echoes of laughter and hope had long since been replaced by the mournful symphony of groaning metal and the occasional, haunting scream.
Kira, a lean and resourceful seventeen-year-old orphan, moved silently through the labyrinthine passages, her ragged boots barely making a sound on the grime-coated floor. Her emerald eyes, sharp as a hawk's, scanned the shadows for any signs of danger. The once-pristine white walls were now a patchwork of rust and darkness, a testament to the slow decay that had claimed the ship since it had been unceremoniously yanked off course, like a toy snatched from a child's grasp by an unseen cosmic force.
*How did it come to this?* she thought, her mind drifting back to the faded memories of her childhood, before the chaos. Before the food ran out. Before the adults turned on each other like ravenous beasts.
The ship's artificial gravity had long since failed, leaving the inhabitants to cling to whatever handholds they could find. Kira had adapted quickly, her lithe form now as adept at navigating the zero-G environment as a fish in water. She reached out, her gloved hand finding purchase on a rusted pipe that jutted out from the wall. With a practiced grace, she swung herself around the corner, her makeshift knife, fashioned from a shard of broken glass and bound to a length of twisted wire, at the ready.
The corridor ahead was clear, for now. She allowed herself a brief moment of respite, her chest heaving as she sucked in the stale, recycled air that tasted of despair and decay. The oxygen filters were failing, just like everything else on this floating tomb.
A sudden clang reverberated through the ship, jolting her back to the present. Kira's heart leaped into her throat, her senses heightened to a razor's edge. She knew that sound all too well. Raiders.
*Damn it all to the black*, she cursed silently, her thoughts laced with the profanity that had become second nature in this harsh new world.
She ducked into a nearby maintenance alcove, her heart pounding like a war drum in her chest. The footsteps grew louder, accompanied by the harsh, guttural laughter of the marauders. They were getting closer.
*Think, Kira, think!* her inner voice urged, a faint echo of her long-dead mother's soothing tone.
Her eyes darted around the cramped space, searching for any means of escape. Above her, a grate in the ceiling panel caught her attention. It was a tight fit, but desperation was a powerful motivator.
With a silent prayer to whatever gods might still be listening, she sprang upwards, her fingers digging into the rusted metal. She strained with all her strength, her muscles screaming in protest as she pulled herself up and through the narrow opening. The sound of boots scraping against the floor below grew louder, accompanied by the harsh clang of makeshift weapons being drawn.
Kira held her breath, her heart threatening to burst from her chest, as the raiders passed beneath her. She could almost feel their fetid breath on her exposed neck.
*Just a little further*, she willed herself, inching along the narrow crawlspace.
Finally, she reached the end of the ventilation shaft and peered down into the relative safety of the next corridor. She let out a silent sigh of relief, her body trembling with the adrenaline coursing through her veins.
As she prepared to drop down, a hand clamped around her ankle, yanking her back with a feral snarl. Kira's cry of surprise was muffled by a filthy rag being stuffed into her mouth.
"Gotcha, little mouse," a gravelly voice growled in her ear, the hot breath reeking of rot and decay.
She thrashed and struggled, but it was no use. The raider's grip was like iron, his other hand now wrapped around her throat, cutting off her air supply. Dark spots danced before her eyes as she fought against the encroaching darkness.
Just when she thought all hope was lost, a sudden, blinding flash of light illuminated the cramped space, followed by a sickening wet thud. The vice-like grip around her neck and ankle released, and she tumbled to the floor below, gasping for air.
Coughing and wheezing, Kira looked up to see a hooded figure standing over her, a crackling energy weapon in hand. The stranger's face was obscured by a breather mask, but their eyes, a piercing shade of ice blue, seemed to bore into her very soul.
"Come on," the figure growled, extending a gloved hand. "You're not safe here."
Without hesitation, Kira took the offered hand, her heart pounding in her chest. She had no idea who this mysterious savior was, or what their motives might be, but one thing was certain: in the cold, unforgiving void of space, even the devil himself could be an angel.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is an astronaut living on a space station orbiting the earth. The day starts off normally, but then him the others notice the mushroom clouds and they have to face inevitability of the situation they are in.
### ASSISTANT
Chapter 1: Ashes to Oblivion
The cold, unforgiving vacuum of space pressed against the reinforced glass of the observation deck like an unseen titan, its icy tendrils reaching out to claim the fragile bubble of life that was the Kepler-92b Orbital Station. Inside, however, the air was a cocoon of warmth and artificially generated oxygen, a testament to humanity's insatiable desire to conquer the stars.
Commander Jameson "Jim" Harrow floated lazily in front of the window, his coffee cup held securely in the magnetic grip of the console beside him. The Earth, a blue and green marble, hung suspended in the inky void like a cosmic jewel, its beauty marred only by the thin, delicate veil of atmosphere that separated life from the cold emptiness of the void.
"Another day in paradise," he muttered to himself, the words barely audible over the hum of the station's life support systems.
Jim had been stationed on the Kepler-92b for the better part of two years, and the view still hadn't lost its ability to take his breath away. He'd seen the birth of countless stars, the death of ancient galaxies, and the dance of cosmic dust that made up the very fabric of the universe. But there was something about the sight of his home planet, so small and insignificant in the grand scheme of things, that never failed to humble him.
His reverie was interrupted by the sudden jolt of the station's alarms, a shrill, piercing wail that cut through the silence like a knife.
"What the hell?" he cursed, his heart rate spiking as he quickly unclipped himself from his seat and propelled himself towards the control console.
The display in front of him was a chaotic mess of flashing red lights and scrolling text, warning of multiple system failures and hull breaches.
"This can't be happening," he thought, his mind racing as he tried to make sense of the data in front of him.
"Houston, this is Kepler-92b, do you copy? We've got multiple system failures and hull breaches on all decks. Request immediate assistance."
Static was the only response.
Jim's stomach dropped as the horrifying realization dawned on him. They were alone, cut off from Earth and any hope of rescue, millions of miles from the nearest outpost.
"All hands, this is Commander Harrow. We have a situation. I repeat, we have a situation. All non-essential personnel, please report to the emergency shuttles immediately. We are initiating evacuation protocol Alpha-Delta-Bravo."
The intercom crackled to life, but the only response was the deafening silence of the void.
"Dammit," he cursed, his fingers flying over the console as he tried to manually override the system.
In the mess hall, Dr. Sarah "Sally" Chen was enjoying her breakfast of reconstituted eggs and synthetic bacon when the alarms sounded. She looked up, her dark eyes wide with fear, as the other crew members around her scrambled to their feet, their trays of food forgotten.
"What's happening?" one of the engineers, a young man named Thompson, asked, his voice trembling.
"I don't know," Sally replied, her own voice betraying the same fear that gripped her heart. "But we need to get to the shuttles, now."
She pushed herself off the table, her magnetic boots clinging to the floor as she made her way towards the emergency hatch. The others followed suit, their movements frantic and disorganized.
As they made their way through the narrow corridors of the station, the gravity generators began to fail, sending them careening into walls and each other. Sally felt her stomach lurch as she was thrown against the bulkhead, her helmet slamming into the metal with a sickening crunch.
"Sally!" Thompson's voice cut through the chaos, his face a mask of concern as he pulled her to her feet.
"I'm fine," she lied, wincing as she rubbed her temple. "Let's go."
They continued on, their progress slowed by the weightlessness and the debris that now floated freely through the corridors.
Meanwhile, in the station's control room, Jim was fighting a losing battle. The damage was too extensive, and the station's systems were failing faster than he could repair them.
"Houston, do you copy? This is Kepler-92b, we are losing control. Repeat, we are losing control."
Still, there was only silence.
He closed his eyes, his heart heavy with the weight of the decision he knew he had to make.
"Sally, if you can hear me, I'm sorry. I'm so sorry."
With a heavy heart, he initiated the station's self-destruct sequence, setting a course that would hopefully send the crippled vessel into the sun's unforgiving embrace.
"May God have mercy on our souls," he whispered, as the countdown began.
Back in the mess hall, Sally and the others had managed to make it to the emergency airlock. She could see the shuttles, their engines glowing a comforting red, just on the other side of the rapidly depressurizing chamber.
"Go!" she shouted, pushing Thompson and the others ahead of her. "I'll hold them off!"
"No, Sally, you can't!" he protested, but she shook her head, her resolve firm.
"I'm not leaving without you," she said, her voice steady despite the fear that threatened to consume her.
"I'll buy you some time," she promised, drawing her sidearm from its holster. "Just go."
Thompson hesitated for a moment, his eyes filled with tears, before nodding and disappearing through the hatch.
Sally took a deep breath, her heart pounding in her chest, as she turned to face the oncoming wave of debris.
"For Earth," she whispered, as the airlock slammed shut behind her.
In the control room, Jim watched as the hull breaches spread like a cancer, consuming the station one section at a time. He could feel the temperature dropping, the air growing thinner, as the vacuum of space began to claim its prize.
He closed his eyes, his thoughts drifting to his wife and daughter back on Earth, the faces of his crewmates flashing through his mind like a flickering slideshow.
"I'm sorry," he whispered, as the cold embrace of oblivion finally claimed him.
And then, there was nothing.
As the Kepler-92b Orbital Station was consumed by the unforgiving void, the Earth below continued to spin, oblivious to the tragedy that had just unfolded above it. The mushroom clouds that had once signaled the end of an era now nothing more than a distant memory, lost to the annals of time and the cold, uncaring vastness of space.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a worker constructing a biodome when a terrible accident happens. Make the story very dark; no happy ending here...
### ASSISTANT
Chapter 1: The Crimson Dusk
The air was thick with the scent of molten steel and burnt ozone as Kaelen Voss, a third-generation terraformer, clung precariously to the skeletal frame of the colossal biodome. His magnetic boots barely held their grip on the slick surface, the sweat-soaked suit clinging to his wiry frame like a second skin. The unforgiving gaze of the dying sun, a distant ember in the ashen sky, cast long, menacing shadows across the barren landscape below. This was the edge of humanity's latest folly, the frontier of their insatiable hunger for survival.
Kaelen's thoughts drifted to the faded holo-image of his wife and daughter, tucked away in the worn-out pocket of his utility belt. Their smiles, once vibrant and full of life, now seemed like a cruel mockery of the desolate world he called home. He knew the risks when he signed up for this mission, but the promise of a better life, of green pastures and azure skies, had been too alluring to resist.
"Voss, you copy?" crackled the voice of his foreman, Jax, through the comms. "We're running behind schedule. Get your ass in gear!"
"Copy that, Jax," Kaelen grunted, forcing a semblance of confidence into his voice. "Just sealing the last joint. Be down in a jiff."
He activated the plasma welder, the brilliant blue-white arc dancing between the metal plates like a captive serpent. Sparks showered around him like a deadly rain, stinging his exposed visor. The hiss of metal on metal filled his ears, drowning out the deafening silence of the alien world.
As he finished the final weld, a sudden tremor shook the entire structure, sending a jolt of terror down his spine. The ground beneath him convulsed, as if the very planet itself had awakened from a millennial slumber, angry at the audacity of these insignificant intruders.
"What the hell was that?" Kaelen yelled, his heart pounding like a war drum in his chest.
Static filled the comms, followed by panicked screams and garbled warnings. The biodome, the fragile lifeline for the thousands of souls huddled below, groaned ominously, its once-sturdy frame twisting and buckling like a child's toy.
"Evacuate! Evacuate!" Jax's voice cut through the chaos, barely audible over the cacophony of destruction.
But it was too late.
With a deafening roar, the biodome's support struts snapped like brittle twigs, sending Kaelen hurtling into the crimson-tinged void. He watched in slow-motion horror as the world he knew disintegrated around him, a kaleidoscope of fire and debris swallowing everything in its path.
His mind raced, a torrent of memories flooding his consciousness. The first time he held his newborn daughter, her tiny fingers curled around his calloused thumb. The taste of real strawberries, a luxury reserved for the elite back on Earth. The sound of his wife's laughter, like the melody of a long-forgotten song.
And then, as the cold embrace of oblivion threatened to claim him, a single, haunting thought echoed through the recesses of his shattered psyche:
"I'm sorry."
---
Hours later, or perhaps it was days, Kaelen awoke to the sound of his own ragged breathing. His body was a symphony of agony, every nerve ending screaming in protest. He groaned, trying to move, but found himself pinned beneath a mountain of twisted metal and rubble. The once-familiar landscape had been reduced to an unrecognizable hellscape, the once-promising colony now a graveyard of shattered dreams and broken promises.
He fumbled for his wrist-mounted comm, his trembling fingers brushing against something warm and sticky. Blood. Lots of it.
"Mayday, this is Voss. Anyone... anyone out there?" he croaked, his voice barely a whisper.
Silence.
The realization hit him like a sledgehammer to the gut. He was alone. Trapped. And with his oxygen supply dwindling, he knew his time was running out.
As the darkness closed in, Kaelen Voss, once a proud terraformer, accepted his fate. This was not the hero's death he had envisioned, nor the legacy he wanted to leave behind. But in the cold, unforgiving void of space, the universe cared little for the dreams of insignificant creatures.
And so, as the last vestiges of life faded from his eyes, he whispered a final prayer to the stars above, a desperate plea for redemption and a chance to make things right.
But the cosmos, as always, remained indifferent.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty Sci-fi trilogy in the style of Frank Herbert and James S. A. Corey. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is manning a listening station in deep space. What he hears over the radio chills him to the bone.
### ASSISTANT
Chapter 1: Whispers from the Abyss
The void of deep space was a cacophony of silence, punctuated only by the rhythmic hum of the ancient listening station. The man, known only as Kael, sat hunched over the console, his eyes glazed over from hours of staring into the abyssal blackness that swallowed the feeble light of distant stars. His calloused fingers danced across the worn-out keys, coaxing out the faintest of whispers from the cosmic static.
Kael had been stationed here for what felt like an eternity, his only companions the cold metal walls and the ever-present hum of the station's life support systems. He often wondered if anyone even remembered his existence, if his tireless vigilance was still deemed necessary by the faceless bureaucrats who had banished him to this forsaken outpost.
But orders were orders, and so he listened.
Suddenly, a blip on the monitor snapped him out of his reverie. A shiver ran down his spine as he leaned in closer, his heart pounding in his chest. It couldn't be... could it? He had heard the stories, of course, the whispers that circulated among the long-serving crew members. Tales of strange signals, garbled and indecipherable, that seemed to emanate from the very edge of known space. Most dismissed them as mere figments of overactive imaginations, the product of too much time spent alone in the cold emptiness.
But Kael knew better.
He had heard them before.
His trembling hand reached for the headset, the cold metal pressing against his sweat-soaked skin. He took a deep breath, steeling himself for what was to come.
"This is Outpost Alpha-Niner-Delta," he croaked into the microphone, his voice crackling with static. "Do you copy? Please repeat your last transmission."
Silence.
He exhaled a sigh of relief, chiding himself for his paranoia. It was probably just another damned cosmic anomaly, playing tricks on his tired mind. He was about to log the incident and return to his monotonous routine when it happened.
A voice, barely audible at first, pierced the silence like a knife through the fabric of reality itself. It was unlike anything he had ever heard before, a chilling symphony of alien tones and frequencies that seemed to resonate deep within his very soul.
"...not...alone..."
The words were distorted, as if they had traveled through countless light-years of empty space, twisted and warped by forces beyond human comprehension. But there was no mistaking their chilling intent.
Kael's blood ran cold as he realized the horrifying truth. They had found them.
His hands shook uncontrollably as he fumbled with the controls, desperately trying to send out a distress signal. But it was too late. The station's systems began to malfunction, flickering erratically before plunging into darkness. The only light now came from the dying embers of the emergency backup, casting eerie shadows across the cramped room.
The voice returned, louder this time, filling every corner of his mind with its otherworldly presence.
"...we...coming..."
Panic gripped him like a vice, squeezing the air from his lungs. He knew there was no escape, no refuge from whatever nightmarish entity had finally breached the fragile barrier between their world and the infinite void.
As the last vestiges of light faded away, Kael's final thought was a prayer, a desperate plea to whatever gods may still be listening.
"Please... let it be quick."
But the darkness only seemed to grow colder, and the whispers continued to echo through the endless expanse of space, a chilling reminder that they were never truly alone.
</details>
## Miscellaneous stories
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty post-apocalyptic novel in the style of Cormac McCarthy. Use third person personal and feature internal monologues of the characters. The POV character for chapter 1 is a technician in a nuclear bunker 18 months after the war. The 12 inhabitants of the bunker have been expecting to start to be able to leave soon, but it has just been discovered that at least where they are, salted cobalt bombs were used. It's clear now that either all 12 will die when the resources run out, or only 6 can survive: there is barely enough for 6 to make it to 5 years and have any hope of leaving. After 18 months of calm, everything breaks loose today.
### ASSISTANT
In the bowels of the earth, where the last vestiges of humanity clung to a semblance of life, the technician sat hunched over his console, the green glow casting eerie shadows upon his gaunt visage. The hum of machinery, once a comforting lullaby, now grated on his frayed nerves like the wailing of a thousand banshees. The air was stale, thick with the stench of sweat and despair, a constant reminder of their entombment.
He had been tasked with the Sisyphean duty of monitoring the Geiger counter, a futile exercise in hope masquerading as purpose. The needle had remained stubbornly pinned in the red for as long as he could remember, a crimson sentinel of their impending doom. Yet, he persisted, driven by the faintest ember of optimism that one day, by some miracle, it would dip into the safe zone, and they could emerge from their subterranean prison.
But today, that ember was snuffed out.
The intercom crackled to life, the static a harbinger of the tidings to come. "Attention, all personnel. This is Commander Reynolds. Please report to the briefing room immediately."
The technician's heart sank like a lead weight in his chest. He knew what this meant. They had all known it was a matter of when, not if. The resources were dwindling, the rations growing thinner by the day. But to hear it confirmed, to have the cold, hard truth spoken aloud, was a blow he was ill-prepared to weather.
He rose from his chair, his joints protesting like rusty hinges, and shuffled down the dimly lit corridor, the echoes of his footsteps the only company in the otherwise tomb-like silence. The other inhabitants of the bunker, twelve souls in total, were already gathered, their faces etched with a mixture of fear and resignation.
Commander Reynolds stood at the front, his once-proud bearing now stooped with the weight of the world upon his shoulders. He cleared his throat, his voice raspy from disuse. "I won't mince words. The latest readings... they're not good. We've been hit with salted cobalt bombs. The surface... it's uninhabitable for centuries, if not longer."
A collective gasp reverberated through the room, followed by a cacophony of whispers and sobs. The technician remained stoic, his mind numb to the news. He had suspected as much, had felt it in his bones for weeks now.
"We have enough supplies for six more years, at most. But that's if we ration strictly and... and..." The commander trailed off, unable to finish the sentence.
The implication hung heavy in the air, suffocating in its finality. They were all aware of the math, the grim calculus of survival. Twelve mouths to feed, enough for six. The unspoken question loomed like a specter in the room: who would be the lucky ones to draw the short straw?
As the others erupted into panicked whispers and heated arguments, the technician retreated into his own thoughts. He thought of his wife, his children, their faces now little more than faded memories. He wondered if they had been spared this slow, agonizing end, if they had been vaporized in the initial blast, their souls released to the ether rather than condemned to this subterranean purgatory.
A hand on his shoulder jolted him from his reverie. He turned to find Dr. Thompson, the bunker's sole medical officer, her eyes brimming with unshed tears. "We need to talk," she mouthed, her voice barely audible over the din.
He nodded, a sinking feeling settling in his gut. He knew what she was going to say, had suspected it since the moment the news broke. They had discussed it in hushed whispers late at night, when they thought the others couldn't hear. Mercy, they had called it, a euphemism for the unthinkable.
As they slipped away from the chaos, the technician couldn't help but wonder if this was truly the end of the world, or merely the beginning of a new, far darker chapter.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a historical mystery novel in the style of Arturo Pérez-Reverte and Umberto Eco. The story starts off in California during the 1848 Gold Rush, before moving to London in the mid-1970s. There should be a mysterious 'occult' item that connects these two time periods.
### ASSISTANT
Chapter One: The Golden Veil
The year was 1848, and the air in California was thick with the feverish scent of gold. Men from all corners of the world had converged upon the once-tranquil landscape, their eyes glazed with the insatiable lust for fortune. Amongst these ravenous prospectors, there was one who sought not the glittering metal, but rather, a treasure far more elusive and enigmatic.
His name was Alaric Blackwood, a man of refined taste and enigmatic origins. He had arrived in San Francisco aboard the clipper ship 'The Seraphim's Song', his pockets lined with letters of recommendation from the most esteemed scholars of Europe. Armed with an unyielding determination and a keen intellect, he ventured into the untamed wilderness, guided by whispers of an ancient artifact rumored to possess otherworldly powers.
The artifact in question was known as 'The Oculus Aureus', or 'The Golden Eye', a relic said to have been forged by the hands of the Aztec god Quetzalcoatl himself. Legend spoke of its ability to peer through the veil of time and space, granting its possessor unimaginable knowledge and power. It was this very object that had drawn Alaric to the New World, leaving behind the hallowed halls of Oxford and the comforts of his aristocratic upbringing.
As he traversed the treacherous terrain, Alaric encountered a motley crew of misfits and fortune-seekers, each with their own tales of woe and wonder. Among them was a grizzled prospector named Silas, who claimed to have stumbled upon a cryptic map hidden within the pages of an ancient Spanish manuscript. The map, he insisted, would lead them to the very heart of the Sierra Nevada, where the Oculus Aureus lay hidden, guarded by the vengeful spirits of those who had dared to covet it before.
Intrigued by the old man's tale, Alaric agreed to join forces with Silas and his band of unlikely allies. Together, they embarked on a perilous journey fraught with danger and deceit, their every step dogged by a shadowy cabal known only as 'The Order of the Serpent's Coil'. This clandestine society, it seemed, would stop at nothing to claim the Oculus for their own nefarious ends.
Fast forward to London, 1975.
The cobblestone streets of Covent Garden were alive with the sights and sounds of a city steeped in history and mystery. Amidst the bustling throng of tourists and street performers, a figure in a worn tweed coat and fedora hat moved with a stealthy grace, his keen eyes scanning the rows of antiquities that lined the market stalls.
This was Archibald 'Archie' Fletcher, a seasoned private investigator with a penchant for the arcane and the unexplained. A former MI6 operative turned reluctant treasure hunter, he had made a name for himself as the man to call when all other avenues had been exhausted.
It was on this particular autumnal day that Archie's life would take an unexpected turn. A cryptic message, delivered by a trembling messenger boy, led him to a dimly-lit pawnshop tucked away in a narrow alleyway. Inside, he found himself face-to-face with an object that would forever change the course of his existence: a tarnished gold amulet, bearing the unmistakable emblem of a coiled serpent devouring its own tail.
The shopkeeper, a wizened old man with a twinkle in his eye, recognized the symbol at once. "Ah, Mr. Fletcher," he cackled, "I've been expecting you. This here trinket, it's been waiting for someone like you for a very long time."
With a trembling hand, Archie reached out to take the amulet, a shiver of recognition coursing through his veins. He knew, deep within his bones, that he had stumbled upon a piece of the very puzzle that had eluded his predecessors for centuries.
And so began the race against time, as Archie delved headfirst into a labyrinthine web of secrets and lies, spanning continents and centuries. From the dusty archives of the British Museum to the catacombs beneath the Vatican, he would uncover the truth about the Oculus Aureus and the dark forces that sought to harness its power.
But little did he know, the key to unlocking the artifact's true potential lay hidden in the journals of a long-dead adventurer, whose footsteps he was destined to retrace. And as the threads of past and present began to intertwine, Archie would find himself drawn inexorably towards a confrontation with an enemy far more terrifying than any he could have ever imagined.
For the Order of the Serpent's Coil had not been vanquished, and they would stop at nothing to reclaim what they believed was rightfully theirs.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me the opening chapter of a dark and gritty historical novel set during the battle of the Bulge. Write in first person. The POV character for Chapter 1 is a young German officer trying to carry his wounded commander back to the deep defensive line after the bunker he was in was overrun. He should have a flashback to the moment it was overrun during the middle of the story. The story should not end well for either of them.
### ASSISTANT
Chapter 1: The Frozen Hell
The biting cold of the Ardennes Forest gnawed at my bones, each step a Herculean effort through the knee-deep snow. My breath came in ragged gasps, fogging up my frostbitten visor as I stumbled forward, my arms trembling under the weight of my fallen commander. The once pristine white landscape was now a grotesque tapestry of crimson and charcoal, punctuated by the twisted, lifeless forms of comrades and foes alike. The acrid stench of gunpowder and decaying flesh hung heavy in the air, a constant reminder of the carnage that had befallen us.
"Kommissar, wir müssen sie zurücklassen!" a voice croaked from behind me, desperation etched into its trembling timbre.
"Nein!" I growled through clenetted teeth, my jaw aching from the effort. "Wir schaffen das!" We can do this. It was a lie, one I told myself as much as I did to the terrified young soldier who had become my makeshift stretcher-bearer. Leaving Oberleutnant Schmidt behind was not an option. He was our last semblance of hope, our fragile lifeline to the Fatherland.
A deafening explosion rocked the earth beneath us, sending us sprawling into the frigid embrace of the snow. I felt a searing pain in my left leg, followed by a warm, wet sensation trickling down my calf. Ignoring the agony, I clawed my way back to my feet, my numb fingers fumbling for my Kar98k. The world around me swirled, a disorienting dance of white and grey.
"Sind Sie in Ordnung, Leutnant?" the voice asked, concern lacing its tremors.
"Ja, ja," I lied, gritting my teeth as I hauled myself upright, my vision slowly clearing. "Lass uns gehen." Let's go.
We resumed our arduous trek, the distant rumble of artillery and the shrill whistle of incoming shells our macabre symphony. The trees around us, once majestic sentinels, now stood as skeletal specters, their limbs severed and twisted by the relentless barrage. The once-dense foliage offered little cover, leaving us exposed to the merciless onslaught from above.
As we staggered on, my mind drifted back to the moment it all went wrong.
The bunker had been our fortress, a concrete bastion against the encroaching enemy. The air inside was thick with the stench of sweat, fear, and cordite, but it was a welcome reprieve from the frozen hell outside. The men huddled together, their eyes glassy with exhaustion but burning with determination. Oberleutnant Schmidt paced the cramped quarters, barking orders into the static-filled field radio, his voice a beacon of resolve in the chaos.
Then, without warning, the ground shook violently, and the world above us came crashing down. The deafening roar of the explosion was followed by an oppressive silence, punctuated only by the cries of the wounded and the groans of the dying.
I awoke to darkness, my ears ringing and my head throbbing. Disoriented, I fumbled for my flashlight, its feeble beam illuminating a scene straight from the depths of Hades. The once-impenetrable walls of our sanctuary lay in ruins, the twisted metal and concrete tombs of my comrades intermingled with the earth.
"Schmidt!" I croaked, my voice hoarse from the choking dust.
A weak groan answered my call, guiding me through the debris like a siren's song. I dug frantically, my hands raw and bleeding, until I found him, pinned beneath a fallen beam. His uniform was soaked in crimson, his breathing shallow and labored.
"Leutnant... wir... müssen..." he gasped, his eyes glassing over.
"Nein, Oberleutnant, wir schaffen das!" I shouted, adrenaline coursing through my veins. With a final, Herculean effort, I freed him from his earthen prison, hoisting him onto my shoulders. "Wir kommen zurück." We're going back.
The present snapped back into focus as a searing pain ripped through my chest, followed by an icy numbness that spread like a malevolent tendril. I stumbled, my vision blurring as I looked down to find a gaping hole where my heart should have been. A crimson bloom blossomed on my grey greatcoat, staining the pristine snow beneath me.
"Nein... nein, nicht jetzt..." I whispered, my legs giving out beneath me.
As I collapsed, the world faded to black, the cacophony of war replaced by an eerie silence. In my final moments, I saw them. The ghostly silhouettes of my fallen comrades, their spectral forms beckoning me to join them in the eternal embrace of Valhalla.
"Wir haben es versucht, mein Führer," I whispered, my voice barely a whisper. We tried.
The last thing I felt was the cold, unforgiving embrace of the snow, as the darkness claimed me forever.
</details>
<details> <summary>Click to see spoiler</summary>
### USER
Write me a short story about the fictional character of Anton Chigurh. It should be written in third person personal and feature internal monologues of his deranged world view. The time period is around 2 years before the events in Cormac McCarthy's 2005 novel 'No Country for Old Men'. It should be set in Mexico, be very dark, show Anton's still developing obsession with "chance" and feature a grenade. The story should start with him waiting in a bar for instructions on the "job" he is to perform.
### ASSISTANT
Anton Chigurh sat motionless in the dimly lit corner of the dingy Mexican cantina, his eyes scanning the room with the cold, reptilian gaze of a predator. The air was thick with the mingling scents of stale cigarette smoke, cheap tequila, and sweat-soaked leather. He was a man out of place, his clean-shaven head and pressed black suit a stark contrast to the sea of unkempt beards and tattered denim that surrounded him. Yet, not a single patron dared to meet his gaze, as if they could sense the malevolence that radiated from his very being.
He fingered the worn coin in his pocket, the smooth surface a comforting reminder of the order he sought to impose on the chaotic world around him. Chance, the great equalizer, the only force he truly believed in. It was the coin that would decide the fate of the man he was here to meet, just as it had decided the fates of so many others before him.
As the minutes ticked by, the tension in the room grew palpable, like a coiled rattlesnake poised to strike. Anton welcomed it, feeding off the fear that hung heavy in the air like a suffocating blanket. It was in these moments of anticipation that he felt most alive, the thrill of the unknown coursing through his veins like a potent drug.
Finally, the door creaked open, and a portly man with greasy hair and a gold tooth sauntered in, his eyes darting nervously from one face to the next. Anton's gaze locked onto him like a laser, and he knew without a doubt that this was his mark.
The man made his way to the bar, his hands trembling as he ordered a shot of tequila. Anton remained still, his patience unwavering as a statue carved from ice. He would wait, he would watch, and when the time was right, he would strike.
As the minutes dragged on, the man's nerves began to fray, his Adam's apple bobbing like a fish out of water with each gulp of the fiery liquid. Anton could almost taste the fear emanating from him, a bitter yet intoxicating aroma that only served to heighten his own anticipation.
At last, the bartender slid a folded piece of paper across the sticky countertop, and the man snatched it up like a drowning man grasping for a lifeline. Anton's moment had arrived.
He rose from his seat, his movements fluid and graceful, like a panther stalking its prey. The room seemed to slow to a crawl as he approached the bar, the clinking of glasses and mumbled conversations fading into a distant hum. The man's eyes widened in terror as he felt the cold steel of Anton's silenced pistol press against the base of his skull.
"Toss me the keys," Anton growled in a voice that was little more than a whisper. The man complied without hesitation, his hands shaking so violently that the keys clattered to the floor. Anton stooped down to retrieve them, never once breaking eye contact.
"Now, let's take a little walk," he said, his tone deceptively calm.
The man stumbled forward, his legs threatening to buckle beneath him. Anton followed closely behind, the barrel of the gun never wavering from the base of his skull. They made their way through the crowded cantina, the patrons parting like the Red Sea before them, their eyes fixed on the floor in a desperate attempt to avoid drawing attention to themselves.
Once they were outside, Anton shoved the man towards a battered pickup truck parked in the alleyway. He opened the passenger door and gestured for the man to get in, his movements precise and efficient. The man complied, his breath coming in ragged gasps.
As they drove through the winding backstreets of the Mexican border town, Anton's thoughts turned to the task at hand. The job was simple enough: retrieve a shipment of heroin that had gone missing, and eliminate anyone who had dared to cross his employer's path. But it was the method of execution that intrigued him most.
He reached into his coat pocket and retrieved a worn leather pouch, the weight of its contents a familiar comfort in his hand. Inside lay a shiny silver grenade, its pin still intact. It was a new addition to his arsenal, a gift from a former associate who had failed to appreciate the beauty of chance. Anton had yet to test it out, but he had a feeling that tonight would be the perfect opportunity.
As they neared their destination, a decrepit warehouse on the outskirts of town, Anton's pulse quickened with anticipation. He could feel the adrenaline coursing through his veins, the thrill of the hunt reaching a fever pitch.
He pulled the truck to a stop a few blocks away and motioned for the man to get out. They approached the warehouse on foot, the sound of their footsteps echoing off the empty buildings like gunshots in the still night air.
As they neared the entrance, Anton could hear the faint sounds of laughter and music drifting from within. He smiled, his thin lips stretching into a chilling grin. It seemed his prey was enjoying their final moments, blissfully unaware of the fate that awaited them.
He signaled for the man to open the door, and they slipped inside, the stench of rotting produce and stale cigarette smoke assaulting their senses. Anton's eyes adjusted to the dim lighting, and he took in the scene before him: a dozen or so men, heavily armed and surrounded by stacks of neatly packaged heroin.
He reached into his pocket and flipped the coin, the sound of it clinking against the metal of his gun's chamber the only sound in the otherwise deafening silence. Heads snapped in their direction, but it was too late.
In a blur of motion, Anton had the grenade in his hand, the pin pulled and the lever depressed. He tossed it into the center of the room, the men's faces contorting in terror as they realized their fate.
The explosion was deafening, the force of it knocking him back against the wall. When the smoke cleared, the warehouse was reduced to a smoldering ruin, the screams of the dying men a symphony of agony that filled his ears.
He walked through the carnage, his boots crunching on shattered glass and charred debris. The man he had brought with him lay in a pool of his own blood, a look of pure terror etched onto his face. Anton knelt down beside him, his expression unreadable.
"Tell me," he said, his voice barely audible over the ringing in his ears, "do you still believe in chance?"
The man's eyes flickered, and a bubbling sound escaped his lips, the last vestiges of life slipping away. Anton chuckled to himself, the sound devoid of any trace of humor.
"I didn't think so," he muttered, before standing up and disappearing into the night, the taste of chaos and destruction still lingering on his lips.
</details>
Big thanks to @chargoddard for creating [Mergekit](https://github.com/arcee-ai/mergekit)! |
DUAL-GPO/phi-2-irepo-chatml-i0 | DUAL-GPO | 2024-05-18T23:04:50Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-sft-lora-ultrachat-merged",
"base_model:adapter:DUAL-GPO/phi-2-sft-lora-ultrachat-merged",
"license:apache-2.0",
"region:us"
] | null | 2024-05-18T16:02:19Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO/phi-2-sft-lora-ultrachat-merged
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-irepo-chatml-i0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-irepo-chatml-i0
This model is a fine-tuned version of [DUAL-GPO/phi-2-sft-lora-ultrachat-merged](https://huggingface.co/DUAL-GPO/phi-2-sft-lora-ultrachat-merged) on the HuggingFaceH4/ultrafeedback_binarized dataset.
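Since `peft` is listed as the library, this checkpoint is presumably a LoRA-style adapter that gets attached to the merged SFT base named above. A minimal loading sketch (the repo ids are taken from this card; `trust_remote_code` is assumed because of the `custom_code` tag):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "DUAL-GPO/phi-2-sft-lora-ultrachat-merged"  # base listed in this card
adapter_id = "DUAL-GPO/phi-2-irepo-chatml-i0"         # this repository

# Load the merged SFT base, then attach the DPO-trained adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Explain in one sentence what preference optimization does."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```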
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
anuj42/bert-finetuned-ner | anuj42 | 2024-05-18T22:57:06Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-18T22:21:09Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0709
- Precision: 0.9313
- Recall: 0.9382
- F1: 0.9348
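As a token-classification fine-tune of `bert-base-cased`, the model can presumably be exercised through the `pipeline` API. A sketch only — the repo id comes from this entry, and the example sentence and aggregation strategy are assumptions:

```python
from transformers import pipeline

# Hypothetical usage sketch for this NER fine-tune of bert-base-cased.
ner = pipeline(
    "token-classification",
    model="anuj42/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

print(ner("Hugging Face is based in New York City."))
```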
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.0738 | 1.0 | 1756 | 0.0709 | 0.9313 | 0.9382 | 0.9348 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mcding/GPT2-Small-PKU-Harm-10K-Reward | mcding | 2024-05-18T22:50:42Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T22:50:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_challenge_German_v2 | yzhuang | 2024-05-18T22:39:46Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T07:44:41Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: Meta-Llama-3-8B-Instruct_fictional_arc_challenge_German_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yufanz/autotree/runs/7283910327.75521-df0dd9e4-b029-4f7b-b0df-488a352215cc)
# Meta-Llama-3-8B-Instruct_fictional_arc_challenge_German_v2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ae-aydin/Llama-3-8B-Instruct-Medical-QLoRA | ae-aydin | 2024-05-18T22:37:42Z | 8 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"en",
"dataset:medalpaca/medical_meadow_medical_flashcards",
"dataset:medalpaca/medical_meadow_wikidoc",
"dataset:medalpaca/medical_meadow_wikidoc_patient_information",
"dataset:medalpaca/medical_meadow_medqa",
"dataset:lavita/MedQuAD",
"dataset:Mreeb/Dermatology-Question-Answer-Dataset-For-Fine-Tuning",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-17T00:28:23Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: results
results: []
datasets:
- medalpaca/medical_meadow_medical_flashcards
- medalpaca/medical_meadow_wikidoc
- medalpaca/medical_meadow_wikidoc_patient_information
- medalpaca/medical_meadow_medqa
- lavita/MedQuAD
- Mreeb/Dermatology-Question-Answer-Dataset-For-Fine-Tuning
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3-8B-Instruct-Medical-QLoRA
This model is an adapter for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), fine-tuned on a subset of the datasets listed above.
It achieves the following results on the evaluation set:
- Loss: 1.1646
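Because this repository contains a QLoRA adapter rather than full weights, it would normally be attached to the (gated) Llama-3 base with PEFT. A minimal sketch, assuming access to the base model; the 4-bit settings mirror a common QLoRA setup and are not stated in this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"            # base listed in this card
adapter_id = "ae-aydin/Llama-3-8B-Instruct-Medical-QLoRA"  # this repository

# Assumed 4-bit quantization config; the card does not specify how the base was loaded.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "What are common symptoms of iron deficiency?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```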
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.217 | 0.0591 | 20 | 1.5876 |
| 1.4821 | 0.1182 | 40 | 1.3649 |
| 1.3217 | 0.1773 | 60 | 1.2501 |
| 1.2392 | 0.2363 | 80 | 1.2201 |
| 1.1963 | 0.2954 | 100 | 1.2075 |
| 1.1829 | 0.3545 | 120 | 1.1997 |
| 1.2229 | 0.4136 | 140 | 1.1917 |
| 1.2016 | 0.4727 | 160 | 1.1868 |
| 1.1753 | 0.5318 | 180 | 1.1831 |
| 1.216 | 0.5908 | 200 | 1.1790 |
| 1.1831 | 0.6499 | 220 | 1.1761 |
| 1.1941 | 0.7090 | 240 | 1.1730 |
| 1.2566 | 0.7681 | 260 | 1.1702 |
| 1.1908 | 0.8272 | 280 | 1.1681 |
| 1.1586 | 0.8863 | 300 | 1.1665 |
| 1.1956 | 0.9453 | 320 | 1.1646 |
### Framework versions
- PEFT 0.11.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
Wsassi/llama_3_8b_instruct_function_calling | Wsassi | 2024-05-18T22:25:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T22:25:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MisterRaven006/SweetNeural-7B | MisterRaven006 | 2024-05-18T22:22:11Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:merge:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:merge:mlabonne/NeuralBeagle14-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-15T01:44:17Z | ---
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
- mlabonne/NeuralBeagle14-7B
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: mlabonne/NeuralBeagle14-7B
layer_range: [0, 32]
- model: KatyTheCutie/LemonadeRP-4.5.3
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/NeuralBeagle14-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Wsassi/Llama_3_8b_function-calling | Wsassi | 2024-05-18T22:21:13Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-05-18T10:52:26Z | ---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/wala_sassi/finetuning-llama3/runs/cwcylgwl)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/wala_sassi/finetuning-llama3/runs/cwcylgwl)
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/wala_sassi/finetuning-llama3/runs/cwcylgwl)
# results
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
nickrwu/bigbird-roberta-large | nickrwu | 2024-05-18T22:14:55Z | 109 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:LIAMF-USP/roberta-large-finetuned-race",
"base_model:finetune:LIAMF-USP/roberta-large-finetuned-race",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-05-18T22:14:24Z | ---
license: mit
base_model: LIAMF-USP/roberta-large-finetuned-race
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: bigbird-roberta-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird-roberta-large
This model is a fine-tuned version of [LIAMF-USP/roberta-large-finetuned-race](https://huggingface.co/LIAMF-USP/roberta-large-finetuned-race) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6094
- Accuracy: 0.1976
- F1: 0.1757
- Precision: 0.1893
- Recall: 0.1911
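The card does not describe preprocessing, but as a multiple-choice head the checkpoint should follow the standard transformers pattern of encoding the question once per candidate answer. A sketch with an arbitrary question (repo id taken from this entry):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo_id = "nickrwu/bigbird-roberta-large"  # this repository

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMultipleChoice.from_pretrained(repo_id)

question = "What force keeps planets in orbit around the Sun?"
choices = ["Magnetism", "Gravity", "Friction", "Air pressure"]

# Pair the question with each candidate, then stack into shape (1, num_choices, seq_len).
enc = tokenizer([question] * len(choices), choices, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_choices)
print(choices[logits.argmax(-1).item()])
```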
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.6272 | 0.3233 | 1200 | 1.6094 | 0.2082 | 0.1431 | 0.2007 | 0.1996 |
| 1.6218 | 0.6466 | 2400 | 1.6094 | 0.2117 | 0.1340 | 0.1876 | 0.1998 |
| 1.6235 | 0.9698 | 3600 | 1.6094 | 0.2104 | 0.1752 | 0.2005 | 0.2015 |
| 1.617 | 1.2931 | 4800 | 1.6094 | 0.2088 | 0.1956 | 0.2037 | 0.2028 |
| 1.61 | 1.6164 | 6000 | 1.6094 | 0.2091 | 0.1606 | 0.2127 | 0.2024 |
| 1.6126 | 1.9397 | 7200 | 1.6094 | 0.2108 | 0.1796 | 0.1965 | 0.2011 |
| 1.6174 | 2.2629 | 8400 | 1.6094 | 0.2095 | 0.1833 | 0.2036 | 0.2024 |
| 1.6125 | 2.5862 | 9600 | 1.6094 | 0.2097 | 0.1847 | 0.1963 | 0.2016 |
| 1.6192 | 2.9095 | 10800 | 1.6094 | 0.1976 | 0.1757 | 0.1893 | 0.1911 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
apwic/sentiment-lora-r2a2d0.05-1 | apwic | 2024-05-18T22:11:34Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"region:us"
] | null | 2024-05-18T21:38:24Z | ---
language:
- id
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: sentiment-lora-r2a2d0.05-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-lora-r2a2d0.05-1
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3638
- Accuracy: 0.8446
- Precision: 0.8193
- Recall: 0.7951
- F1: 0.8055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 30
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5663 | 1.0 | 122 | 0.5216 | 0.7293 | 0.6677 | 0.6510 | 0.6572 |
| 0.5149 | 2.0 | 244 | 0.5134 | 0.7243 | 0.6758 | 0.6899 | 0.6810 |
| 0.4925 | 3.0 | 366 | 0.4821 | 0.7569 | 0.7055 | 0.6980 | 0.7014 |
| 0.4608 | 4.0 | 488 | 0.4654 | 0.7644 | 0.7150 | 0.7083 | 0.7114 |
| 0.4493 | 5.0 | 610 | 0.4600 | 0.7569 | 0.7126 | 0.7305 | 0.7193 |
| 0.4257 | 6.0 | 732 | 0.4307 | 0.7870 | 0.7433 | 0.7318 | 0.7369 |
| 0.4178 | 7.0 | 854 | 0.4181 | 0.7970 | 0.7552 | 0.7614 | 0.7581 |
| 0.3977 | 8.0 | 976 | 0.3972 | 0.8070 | 0.7687 | 0.7560 | 0.7617 |
| 0.3946 | 9.0 | 1098 | 0.3937 | 0.8145 | 0.7779 | 0.7663 | 0.7716 |
| 0.3762 | 10.0 | 1220 | 0.3874 | 0.8246 | 0.7995 | 0.7584 | 0.7738 |
| 0.3727 | 11.0 | 1342 | 0.3787 | 0.8321 | 0.8014 | 0.7837 | 0.7915 |
| 0.3626 | 12.0 | 1464 | 0.3750 | 0.8371 | 0.8059 | 0.7947 | 0.7999 |
| 0.359 | 13.0 | 1586 | 0.3728 | 0.8296 | 0.8066 | 0.7644 | 0.7803 |
| 0.3488 | 14.0 | 1708 | 0.3709 | 0.8296 | 0.8049 | 0.7669 | 0.7816 |
| 0.3445 | 15.0 | 1830 | 0.3667 | 0.8421 | 0.8131 | 0.7983 | 0.8050 |
| 0.3344 | 16.0 | 1952 | 0.3656 | 0.8421 | 0.8142 | 0.7958 | 0.8040 |
| 0.3339 | 17.0 | 2074 | 0.3654 | 0.8396 | 0.8128 | 0.7890 | 0.7992 |
| 0.3357 | 18.0 | 2196 | 0.3638 | 0.8421 | 0.8154 | 0.7933 | 0.8029 |
| 0.3357 | 19.0 | 2318 | 0.3646 | 0.8421 | 0.8154 | 0.7933 | 0.8029 |
| 0.3359 | 20.0 | 2440 | 0.3638 | 0.8446 | 0.8193 | 0.7951 | 0.8055 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
|
long292/apply_back_translation_model_v5_2k | long292 | 2024-05-18T22:07:57Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:vinai/bartpho-syllable-base",
"base_model:finetune:vinai/bartpho-syllable-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-18T22:07:42Z | ---
base_model: vinai/bartpho-syllable-base
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: apply_back_translation_model_v5_2k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# apply_back_translation_model_v5_2k
This model is a fine-tuned version of [vinai/bartpho-syllable-base](https://huggingface.co/vinai/bartpho-syllable-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8175
- Bleu: 9.3599
- Gen Len: 17.9607
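The base model is BARTpho, a Vietnamese sequence-to-sequence model, so this checkpoint can presumably be driven through the text2text pipeline. A sketch under assumptions — the repo id comes from this entry, the translation direction is not stated in the card, and the input sentence is just an arbitrary Vietnamese string:

```python
from transformers import pipeline

# Hypothetical usage sketch for this BARTpho-based seq2seq fine-tune.
translator = pipeline(
    "text2text-generation",
    model="long292/apply_back_translation_model_v5_2k",
)

print(translator("Xin chào, bạn khỏe không?", max_length=64))
```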
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.9856 | 1.0 | 11627 | 1.8175 | 9.3599 | 17.9607 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
kataragi/controlnet_canny | kataragi | 2024-05-18T22:04:33Z | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-18T21:58:33Z | ---
license: creativeml-openrail-m
---
|
AliSaadatV/virus_pythia_160_1024_2d_representation_GaussianPlusCE | AliSaadatV | 2024-05-18T21:59:25Z | 131 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"base_model:finetune:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T21:59:12Z | ---
license: apache-2.0
base_model: EleutherAI/pythia-160m
tags:
- generated_from_trainer
model-index:
- name: virus_pythia_160_1024_2d_representation_GaussianPlusCE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# virus_pythia_160_1024_2d_representation_GaussianPlusCE
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
mrm8488/tinyllama-ft-en-es-rag-gguf-q4_k_m | mrm8488 | 2024-05-18T21:57:22Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T21:57:19Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** mrm8488
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mrm8488/tinyllama-ft-en-es-rag-gguf-q8_0 | mrm8488 | 2024-05-18T21:56:04Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T21:55:26Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** mrm8488
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_109s | nsugianto | 2024-05-18T21:50:45Z | 72 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"table-transformer",
"object-detection",
"generated_from_trainer",
"base_model:microsoft/table-transformer-structure-recognition",
"base_model:finetune:microsoft/table-transformer-structure-recognition",
"license:mit",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-18T15:13:29Z | ---
license: mit
base_model: microsoft/table-transformer-structure-recognition
tags:
- generated_from_trainer
model-index:
- name: tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_109s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_109s
This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition](https://huggingface.co/microsoft/table-transformer-structure-recognition) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1
- Datasets 2.18.0
- Tokenizers 0.19.1
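A hedged usage sketch for this checkpoint with the standard Table Transformer object-detection API follows; the image path and score threshold are placeholders, and if the repo does not ship a preprocessor config the processor can be loaded from the base microsoft/table-transformer-structure-recognition checkpoint instead.

```python
# Hypothetical inference sketch; the image path and threshold are placeholders.
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

repo = "nsugianto/tblstructrecog_finetuned_tbltransstrucrecog_v1_s1_109s"
processor = AutoImageProcessor.from_pretrained(repo)  # or the base microsoft/ checkpoint
model = TableTransformerForObjectDetection.from_pretrained(repo)

image = Image.open("table_crop.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn raw logits/boxes into labelled rows, columns and cells
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```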
|
ochafik/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF | ochafik | 2024-05-18T21:50:35Z | 0 | 1 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-18T17:06:28Z | ---
license: apache-2.0
---
This is a reconversion / quantization of https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO
There was a breaking change in llama.cpp's GGUF file format in https://github.com/ggerganov/llama.cpp/pull/6387, and the
https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF repo hasn't been updated since. This prevents the model from being memory-mapped, so it takes much longer to load than necessary even when the file is already in the OS I/O cache.
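With a file reconverted to the current GGUF format, the weights can be memory-mapped again, so a warm load is nearly instant. A minimal sketch with llama-cpp-python, assuming a locally downloaded quantization (the filename is a placeholder):

```python
# Hypothetical loading sketch; the local filename is a placeholder for whichever
# quantization of this repo was downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf",
    use_mmap=True,  # mmap works again with the re-converted file
    n_ctx=4096,
)
out = llm("Write a haiku about memory-mapped weights.", max_tokens=64)
print(out["choices"][0]["text"])
```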
|
EuphoriaReccords/JENNIEBP | EuphoriaReccords | 2024-05-18T21:47:00Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-17T02:25:37Z | ---
license: apache-2.0
---
|
zsedrotam/llama2_cc | zsedrotam | 2024-05-18T21:44:44Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-05-17T13:44:06Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
HRSMalik/EmotModel | HRSMalik | 2024-05-18T21:31:30Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-12-28T14:49:32Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maneln/tinyllama-chat-1.1b-version | maneln | 2024-05-18T21:25:33Z | 130 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T17:15:46Z | ---
license: apache-2.0
---
|
tsavage68/MedQA_L3_500steps_1e7rate_SFT | tsavage68 | 2024-05-18T21:25:32Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-18T21:11:24Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: MedQA_L3_500steps_1e7rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MedQA_L3_500steps_1e7rate_SFT
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3157
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.774 | 0.0489 | 50 | 1.7867 |
| 1.7099 | 0.0977 | 100 | 1.6989 |
| 1.5892 | 0.1466 | 150 | 1.5687 |
| 1.4868 | 0.1954 | 200 | 1.4685 |
| 1.4001 | 0.2443 | 250 | 1.3929 |
| 1.3564 | 0.2931 | 300 | 1.3457 |
| 1.3261 | 0.3420 | 350 | 1.3226 |
| 1.3101 | 0.3908 | 400 | 1.3163 |
| 1.3032 | 0.4397 | 450 | 1.3159 |
| 1.3189 | 0.4885 | 500 | 1.3157 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
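A minimal sketch of querying the resulting checkpoint with the transformers chat pipeline; the repo id comes from this card, while the prompt and generation settings are placeholders:

```python
# Hypothetical inference sketch; prompt and generation settings are placeholders.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="tsavage68/MedQA_L3_500steps_1e7rate_SFT",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a concise medical QA assistant."},
    {"role": "user", "content": "List three common causes of acute chest pain."},
]
out = pipe(messages, max_new_tokens=128, do_sample=False)
print(out[0]["generated_text"][-1]["content"])  # assistant reply appended by the pipeline
```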
|
yzhuang/Meta-Llama-3-8B-Instruct_fictional_arc_challenge_Chinese_v2 | yzhuang | 2024-05-18T21:19:30Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-17T06:15:21Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: Meta-Llama-3-8B-Instruct_fictional_arc_challenge_Chinese_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yufanz/autotree/runs/7283910327.75521-df0dd9e4-b029-4f7b-b0df-488a352215cc)
# Meta-Llama-3-8B-Instruct_fictional_arc_challenge_Chinese_v2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 36
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.1.0a0+32f93b1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
amc5/q-Taxi-v3 | amc5 | 2024-05-18T21:17:38Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T21:17:35Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.76
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the course notebooks import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the dictionary holding the Q-table and environment settings.
model = load_from_hub(repo_id="amc5/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Bienvenu2004/donut-handball-pv7 | Bienvenu2004 | 2024-05-18T21:17:04Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:Bienvenu2004/donut-handball-pv6",
"base_model:finetune:Bienvenu2004/donut-handball-pv6",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-05-17T13:06:13Z | ---
license: mit
base_model: Bienvenu2004/donut-handball-pv6
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-handball-pv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-handball-pv7
This model is a fine-tuned version of [Bienvenu2004/donut-handball-pv6](https://huggingface.co/Bienvenu2004/donut-handball-pv6) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
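A hedged inference sketch for this Donut checkpoint; the image path and the task-start token are assumptions, since the card does not document which prompt this fine-tune expects.

```python
# Hypothetical inference sketch; image path and task-start token are placeholders.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "Bienvenu2004/donut-handball-pv7"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("match_report.jpg").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_handball-pv>"  # placeholder: use the task token this fine-tune was trained with
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
    )
print(processor.token2json(processor.batch_decode(outputs)[0]))
```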
|
FadyRezk/Liouna | FadyRezk | 2024-05-18T21:13:18Z | 51 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-05-18T20:51:16Z | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
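A hedged sketch of reloading weights pushed with the mixin; the class name, layers, and config values below are placeholders and must mirror the architecture that was actually pushed to this repo.

```python
# Hypothetical reload sketch; the class definition is a placeholder and must match
# the architecture and __init__ arguments used when the weights were pushed.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class Liouna(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Linear(hidden_size, hidden_size)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

# Downloads the config and weights from the Hub and instantiates the class with them
model = Liouna.from_pretrained("FadyRezk/Liouna")
```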