modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-03 06:27:42) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 535 classes) | tags (list, 1-4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-03 06:27:02) | card (string, 11-1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
tgrhn/wav2vec2-conformer-turkish-2 | tgrhn | 2024-06-26T00:43:14Z | 10 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "wav2vec2-conformer", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-06-22T14:50:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
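Until the authors add an official example, the sketch below is an assumption-based starting point: it loads this repository with the 🤗 `pipeline` API and assumes the checkpoint is a standard CTC-style `wav2vec2-conformer` ASR model; `sample.wav` is a placeholder path.
```python
# Minimal sketch (assumption: standard CTC-style ASR checkpoint in this repo).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="tgrhn/wav2vec2-conformer-turkish-2",
)

# "sample.wav" is a placeholder; replace it with a real Turkish audio file.
print(asr("sample.wav")["text"])
```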
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). A rough worked example follows the list below.
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
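As a back-of-the-envelope illustration of how such an estimate is computed (all numbers below are placeholders, not measurements for this model):
```python
# Rough CO2eq estimate in the spirit of Lacoste et al. (2019); every value is a placeholder.
gpu_power_kw = 0.3       # assumed average GPU power draw in kW
hours_used = 10          # assumed total training time in hours
carbon_intensity = 0.4   # assumed grid intensity in kg CO2eq per kWh

emissions_kg = gpu_power_kw * hours_used * carbon_intensity
print(f"Estimated emissions: ~{emissions_kg:.1f} kg CO2eq")
```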
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
samvelkoch/masked-mamba | samvelkoch | 2024-06-26T00:24:40Z | 10 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us"] | text-generation | 2024-06-26T00:20:22Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [h2oai/h2ogpt-4096-llama2-7b](https://huggingface.co/h2oai/h2ogpt-4096-llama2-7b)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.40.2
```
Also make sure you provide your Hugging Face token if the model is stored in a private repo.
- You can log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
You will also need to download the classification head, either manually, or by running the following code:
```python
from huggingface_hub import hf_hub_download
model_name = "samvelkoch/masked-mamba" # either local folder or huggingface model name
hf_hub_download(repo_id=model_name, filename="classification_head.pth", local_dir="./")
```
You can make classification predictions by following the example below:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "samvelkoch/masked-mamba" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
).cuda().eval()
head_weights = torch.load("classification_head.pth", map_location="cuda")
# settings can be arbitrary here as we overwrite with saved weights
head = torch.nn.Linear(1, 1, bias=False).to("cuda")
head.weight.data = head_weights
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
out = model(**inputs).logits
logits = head(out[:,-1])
print(logits)
```
## Quantization and sharding
You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map="auto"```.
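For example, a minimal sketch of 4-bit loading with automatic sharding (this assumes `bitsandbytes` and `accelerate` are installed; the settings are illustrative only):
```python
# Sketch: 4-bit quantized load with automatic multi-GPU sharding (illustrative settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "samvelkoch/masked-mamba"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,    # or load_in_8bit=True
    device_map="auto",    # shard layers across available GPUs
    trust_remote_code=True,
)
```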
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32000, 4096, padding_idx=0)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaSdpaAttention(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=4096, bias=False)
(v_proj): Linear(in_features=4096, out_features=4096, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
(up_proj): Linear(in_features=4096, out_features=11008, bias=False)
(down_proj): Linear(in_features=11008, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
Anujgr8/Whisper-Anuj-Medum-Odia | Anujgr8 | 2024-06-26T00:06:27Z | 8 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-06-25T18:27:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
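Until the authors add an official example, the sketch below is an assumption-based starting point using the 🤗 `pipeline` API; the dummy silent waveform is a placeholder for real Odia speech sampled at 16 kHz.
```python
# Minimal sketch (assumption: standard Whisper checkpoint in this repo).
import numpy as np
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Anujgr8/Whisper-Anuj-Medum-Odia",
)

# Placeholder input: one second of silence at 16 kHz; replace with real Odia speech.
print(asr(np.zeros(16_000, dtype=np.float32))["text"])
```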
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf | RichardErkhov | 2024-06-25T23:52:31Z | 17 | 0 | null | ["gguf", "arxiv:2203.05482", "endpoints_compatible", "region:us", "conversational"] | null | 2024-06-25T19:23:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DolphinStar-12.5B - GGUF
- Model creator: https://huggingface.co/Noodlz/
- Original model: https://huggingface.co/Noodlz/DolphinStar-12.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DolphinStar-12.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q2_K.gguf) | Q2_K | 4.33GB |
| [DolphinStar-12.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ3_XS.gguf) | IQ3_XS | 4.81GB |
| [DolphinStar-12.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ3_S.gguf) | IQ3_S | 5.07GB |
| [DolphinStar-12.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q3_K_S.gguf) | Q3_K_S | 5.04GB |
| [DolphinStar-12.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ3_M.gguf) | IQ3_M | 5.24GB |
| [DolphinStar-12.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q3_K.gguf) | Q3_K | 5.62GB |
| [DolphinStar-12.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q3_K_M.gguf) | Q3_K_M | 5.62GB |
| [DolphinStar-12.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [DolphinStar-12.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ4_XS.gguf) | IQ4_XS | 6.3GB |
| [DolphinStar-12.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_0.gguf) | Q4_0 | 6.57GB |
| [DolphinStar-12.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.IQ4_NL.gguf) | IQ4_NL | 6.64GB |
| [DolphinStar-12.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_K_S.gguf) | Q4_K_S | 6.62GB |
| [DolphinStar-12.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_K.gguf) | Q4_K | 6.99GB |
| [DolphinStar-12.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_K_M.gguf) | Q4_K_M | 6.99GB |
| [DolphinStar-12.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q4_1.gguf) | Q4_1 | 7.29GB |
| [DolphinStar-12.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_0.gguf) | Q5_0 | 8.01GB |
| [DolphinStar-12.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_K_S.gguf) | Q5_K_S | 8.01GB |
| [DolphinStar-12.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_K.gguf) | Q5_K | 8.22GB |
| [DolphinStar-12.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_K_M.gguf) | Q5_K_M | 8.22GB |
| [DolphinStar-12.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q5_1.gguf) | Q5_1 | 8.73GB |
| [DolphinStar-12.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q6_K.gguf) | Q6_K | 9.53GB |
| [DolphinStar-12.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf/blob/main/DolphinStar-12.5B.Q8_0.gguf) | Q8_0 | 12.35GB |
Original model description:
---
license: apache-2.0
---

Custom Model "Dolphin2Star1" Merged by Noodlz.
A 12.5B linear merge using the uncensored Mistral 7B v0.2 as the base, combined with the StarlingLM 7B Beta fine-tune (which is itself based on Mistral 7B v0.1).
have fun =)
[EDIT] - Preset-wise, it seems to like the "ChatML" format.
[EDIT 2] - Usage notes - the model is somewhat picky about batch size and prompt preset/template (maybe because it is a merge of ChatML and OpenChat models).
My current recommended settings & findings:
- Using LM Studio - use the default preset, GPU acceleration at max, prompt eval size at 1024, context length at 32768. This yields decent, coherent results. ChatML works too but occasionally spits out odd text after a couple of turns.
- Using Oobabooga (Windows PC) - runs well using run-in-4bit along with use_flash_attention_2. Default presets and everything work just fine.
- Using OobaBooga (Mac) - [investigating]
## Instruction Template:
```
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{{ '<s>' }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
```
## Chat Template:
```
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{%- if message['content'] -%}
{{- message['content'] + '\n\n' -}}
{%- endif -%}
{%- if user_bio -%}
{{- user_bio + '\n\n' -}}
{%- endif -%}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{- name1 + ': ' + message['content'] + '\n'-}}
{%- else -%}
{{- name2 + ': ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
```
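Rendered in plain Python, the instruction template above expands to a ChatML-style prompt like the following (the example messages are placeholders):
```python
# Sketch: building the ChatML-style prompt described above by hand.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a haiku about dolphins."},
]

prompt = "<s>"
for m in messages:
    prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
prompt += "<|im_start|>assistant\n"  # generation prompt for the assistant turn
print(prompt)
```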
---
license: apache-2.0
base_model:
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NexusFlow/Starling-LM-7B-beta
library_name: transformers
tags:
- mergekit
- merge
---
# output_folder
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
* [NexusFlow/Starling-LM-7B-beta](https://huggingface.co/NexusFlow/Starling-LM-7B-beta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
parameters:
weight: 1.0
slices:
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [0,1]
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [0,1]
parameters:
weight: 0
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [1,8]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [4,12]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [8,16]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [12,20]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [16,24]
- sources:
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [20,28]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [24,31]
- sources:
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
layer_range: [31,32]
- model: NexusFlow/Starling-LM-7B-beta
layer_range: [31,32]
parameters:
weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
```
|
bartowski/Einstein-v7-Qwen2-7B-GGUF | bartowski | 2024-06-25T23:33:55Z | 223 | 7 | null | [
"gguf",
"axolotl",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"science",
"physics",
"chemistry",
"biology",
"math",
"qwen",
"qwen2",
"text-generation",
"en",
"dataset:allenai/ai2_arc",
"dataset:camel-ai/physics",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/biology",
"dataset:camel-ai/math",
"dataset:metaeval/reclor",
"dataset:openbookqa",
"dataset:mandyyyyii/scibench",
"dataset:derek-thomas/ScienceQA",
"dataset:TIGER-Lab/ScienceEval",
"dataset:jondurbin/airoboros-3.2",
"dataset:LDJnr/Capybara",
"dataset:Cot-Alpaca-GPT4-From-OpenHermes-2.5",
"dataset:STEM-AI-mtl/Electrical-engineering",
"dataset:knowrohit07/saraswati-stem",
"dataset:sablo/oasst2_curated",
"dataset:lmsys/lmsys-chat-1m",
"dataset:TIGER-Lab/MathInstruct",
"dataset:bigbio/med_qa",
"dataset:meta-math/MetaMathQA-40K",
"dataset:piqa",
"dataset:scibench",
"dataset:sciq",
"dataset:Open-Orca/SlimOrca",
"dataset:migtissera/Synthia-v1.3",
"dataset:allenai/WildChat",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:teknium/GPTeacher-General-Instruct",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:HuggingFaceH4/no_robots",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:abacusai/SystemChat-1.1",
"dataset:H-D-T/Buzz-V1.2",
"base_model:Qwen/Qwen2-7B",
"base_model:quantized:Qwen/Qwen2-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-06-25T23:11:04Z |
---
language:
- en
license: other
tags:
- axolotl
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- qwen
- qwen2
base_model: Qwen/Qwen2-7B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
- abacusai/SystemChat-1.1
- H-D-T/Buzz-V1.2
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Einstein-v7-Qwen2-7B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3197">b3197</a> for quantization.
Original model: https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B
All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Einstein-v7-Qwen2-7B-Q8_0_L.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q8_1.gguf) | Q8_0_L | 9.12GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. |
| [Einstein-v7-Qwen2-7B-Q8_0.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q8_0.gguf) | Q8_0 | 8.09GB | Extremely high quality, generally unneeded but max available quant. |
| [Einstein-v7-Qwen2-7B-Q6_K_L.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q6_K_L.gguf) | Q6_K_L | 7.54GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. |
| [Einstein-v7-Qwen2-7B-Q6_K.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q6_K.gguf) | Q6_K | 6.25GB | Very high quality, near perfect, *recommended*. |
| [Einstein-v7-Qwen2-7B-Q5_K_L.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q5_K_L.gguf) | Q5_K_L | 6.80GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. |
| [Einstein-v7-Qwen2-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q5_K_M.gguf) | Q5_K_M | 5.44GB | High quality, *recommended*. |
| [Einstein-v7-Qwen2-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q5_K_S.gguf) | Q5_K_S | 5.31GB | High quality, *recommended*. |
| [Einstein-v7-Qwen2-7B-Q4_K_L.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q4_K_L.gguf) | Q4_K_L | 6.10GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Einstein-v7-Qwen2-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q4_K_M.gguf) | Q4_K_M | 4.68GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Einstein-v7-Qwen2-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q4_K_S.gguf) | Q4_K_S | 4.45GB | Slightly lower quality with more space savings, *recommended*. |
| [Einstein-v7-Qwen2-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-IQ4_XS.gguf) | IQ4_XS | 4.21GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Einstein-v7-Qwen2-7B-Q3_K_XL.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q3_K_XL.gguf) | Q3_K_XL | | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Lower quality but usable, good for low RAM availability. |
| [Einstein-v7-Qwen2-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q3_K_L.gguf) | Q3_K_L | 4.08GB | Lower quality but usable, good for low RAM availability. |
| [Einstein-v7-Qwen2-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q3_K_M.gguf) | Q3_K_M | 3.80GB | Even lower quality. |
| [Einstein-v7-Qwen2-7B-IQ3_M.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-IQ3_M.gguf) | IQ3_M | 3.57GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Einstein-v7-Qwen2-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q3_K_S.gguf) | Q3_K_S | 3.49GB | Low quality, not recommended. |
| [Einstein-v7-Qwen2-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-IQ3_XS.gguf) | IQ3_XS | 3.34GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Einstein-v7-Qwen2-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Einstein-v7-Qwen2-7B-Q2_K.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-Q2_K.gguf) | Q2_K | 3.01GB | Very low quality but surprisingly usable. |
| [Einstein-v7-Qwen2-7B-IQ2_M.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-IQ2_M.gguf) | IQ2_M | 2.78GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Einstein-v7-Qwen2-7B-IQ2_S.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-IQ2_S.gguf) | IQ2_S | 2.59GB | Very low quality, uses SOTA techniques to be usable. |
| [Einstein-v7-Qwen2-7B-IQ2_XS.gguf](https://huggingface.co/bartowski/Einstein-v7-Qwen2-7B-GGUF/blob/main/Einstein-v7-Qwen2-7B-IQ2_XS.gguf) | IQ2_XS | 2.46GB | Very low quality, uses SOTA techniques to be usable. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Einstein-v7-Qwen2-7B-GGUF --include "Einstein-v7-Qwen2-7B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Einstein-v7-Qwen2-7B-GGUF --include "Einstein-v7-Qwen2-7B-Q8_0.gguf/*" --local-dir Einstein-v7-Qwen2-7B-Q8_0
```
You can either specify a new local-dir (Einstein-v7-Qwen2-7B-Q8_0) or download them all in place (./)
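The same files can also be fetched from Python with `huggingface_hub` (a sketch; the filenames match the commands above):
```python
# Sketch: downloading a single quant, or all parts of a split quant, from Python.
from huggingface_hub import hf_hub_download, snapshot_download

# Single file:
hf_hub_download(
    repo_id="bartowski/Einstein-v7-Qwen2-7B-GGUF",
    filename="Einstein-v7-Qwen2-7B-Q4_K_M.gguf",
    local_dir="./",
)

# Split files (everything under the Q8_0 folder):
snapshot_download(
    repo_id="bartowski/Einstein-v7-Qwen2-7B-GGUF",
    allow_patterns=["Einstein-v7-Qwen2-7B-Q8_0.gguf/*"],
    local_dir="Einstein-v7-Qwen2-7B-Q8_0",
)
```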
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also used on AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
tchen175/llama2-dolly-dataset-intel-1 | tchen175 | 2024-06-25T23:32:21Z | 6 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-25T23:29:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
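Until the authors add an official example, the sketch below is an assumption-based starting point that loads this repository as a standard causal LM; the prompt is a placeholder.
```python
# Minimal sketch (assumption: standard causal LM checkpoint in this repo).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tchen175/llama2-dolly-dataset-intel-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain what a model card is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```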
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DBangshu/gemma_e4_2_3 | DBangshu | 2024-06-25T23:31:30Z | 6 | 0 | transformers | ["transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-06-25T18:27:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
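Until the authors add an official example, the sketch below is an assumption-based starting point; it assumes the tokenizer ships a chat template (the tags mark the model as conversational) and uses a placeholder message.
```python
# Minimal sketch (assumptions: standard Gemma chat checkpoint with a chat template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DBangshu/gemma_e4_2_3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what a model card is in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```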
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
athreesh/llava-medmnist | athreesh | 2024-06-25T23:26:39Z | 5 | 0 | transformers | ["transformers", "safetensors", "llava_llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-06-25T23:03:15Z |
# LLaVA-MedMNIST
This is a fine-tuned version of LLaVA on the MedMNIST dataset for medical image analysis.
## Model description
LoRA fine-tuned on the MedMNIST dataset (https://medmnist.com/), which contains 18 standardized datasets for 2D/3D biomedical image classification.
## Intended uses & limitations
Performance is still being evaluated, but the model should be able to classify 2D/3D biomedical images with high accuracy and provide user-friendly outputs.
## Evaluation results
Pending...
|
RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf | RichardErkhov | 2024-06-25T23:16:46Z | 33 | 0 | null | ["gguf", "endpoints_compatible", "region:us"] | null | 2024-06-25T18:17:28Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
openbuddy-mistral-7b-v17.1-32k - GGUF
- Model creator: https://huggingface.co/OpenBuddy/
- Original model: https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v17.1-32k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [openbuddy-mistral-7b-v17.1-32k.Q2_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q2_K.gguf) | Q2_K | 2.55GB |
| [openbuddy-mistral-7b-v17.1-32k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.IQ3_XS.gguf) | IQ3_XS | 2.83GB |
| [openbuddy-mistral-7b-v17.1-32k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.IQ3_S.gguf) | IQ3_S | 2.99GB |
| [openbuddy-mistral-7b-v17.1-32k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q3_K_S.gguf) | Q3_K_S | 2.97GB |
| [openbuddy-mistral-7b-v17.1-32k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.IQ3_M.gguf) | IQ3_M | 3.08GB |
| [openbuddy-mistral-7b-v17.1-32k.Q3_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q3_K.gguf) | Q3_K | 3.3GB |
| [openbuddy-mistral-7b-v17.1-32k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q3_K_M.gguf) | Q3_K_M | 3.3GB |
| [openbuddy-mistral-7b-v17.1-32k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q3_K_L.gguf) | Q3_K_L | 3.58GB |
| [openbuddy-mistral-7b-v17.1-32k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.IQ4_XS.gguf) | IQ4_XS | 3.7GB |
| [openbuddy-mistral-7b-v17.1-32k.Q4_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q4_0.gguf) | Q4_0 | 3.85GB |
| [openbuddy-mistral-7b-v17.1-32k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.IQ4_NL.gguf) | IQ4_NL | 3.89GB |
| [openbuddy-mistral-7b-v17.1-32k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q4_K_S.gguf) | Q4_K_S | 3.88GB |
| [openbuddy-mistral-7b-v17.1-32k.Q4_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q4_K.gguf) | Q4_K | 4.09GB |
| [openbuddy-mistral-7b-v17.1-32k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q4_K_M.gguf) | Q4_K_M | 4.09GB |
| [openbuddy-mistral-7b-v17.1-32k.Q4_1.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q4_1.gguf) | Q4_1 | 4.27GB |
| [openbuddy-mistral-7b-v17.1-32k.Q5_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q5_0.gguf) | Q5_0 | 4.68GB |
| [openbuddy-mistral-7b-v17.1-32k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q5_K_S.gguf) | Q5_K_S | 4.68GB |
| [openbuddy-mistral-7b-v17.1-32k.Q5_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q5_K.gguf) | Q5_K | 4.81GB |
| [openbuddy-mistral-7b-v17.1-32k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q5_K_M.gguf) | Q5_K_M | 4.81GB |
| [openbuddy-mistral-7b-v17.1-32k.Q5_1.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q5_1.gguf) | Q5_1 | 5.1GB |
| [openbuddy-mistral-7b-v17.1-32k.Q6_K.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q6_K.gguf) | Q6_K | 5.56GB |
| [openbuddy-mistral-7b-v17.1-32k.Q8_0.gguf](https://huggingface.co/RichardErkhov/OpenBuddy_-_openbuddy-mistral-7b-v17.1-32k-gguf/blob/main/openbuddy-mistral-7b-v17.1-32k.Q8_0.gguf) | Q8_0 | 7.2GB |
Original model description:
---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: apache-2.0
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
License: Apache 2.0
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
## Disclaimer
All OpenBuddy models have inherent limitations and may produce erroneous, harmful, offensive, or otherwise undesirable outputs. Users should exercise caution in critical or high-risk scenarios and refrain from using these models there, to avoid personal injury, property damage, or major losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as is" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from the software or from the use of or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
|
mklissarov/paligemma_vqav2 | mklissarov | 2024-06-25T23:04:54Z | 11 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "paligemma", "image-text-to-text", "generated_from_trainer", "base_model:google/paligemma-3b-pt-224", "base_model:finetune:google/paligemma-3b-pt-224", "license:gemma", "text-generation-inference", "endpoints_compatible", "region:us"] | image-text-to-text | 2024-06-25T22:58:24Z |
---
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
model-index:
- name: paligemma_vqav2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_vqav2
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (also mirrored in the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
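As a convenience, the list above can be expressed as 🤗 `TrainingArguments` roughly as follows (a sketch; `output_dir` and any unstated settings are placeholders):
```python
# Sketch: the hyperparameters above as TrainingArguments (placeholders where unstated).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="paligemma_vqav2",       # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,      # effective train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=2,
)
```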
### Training results
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
yiyic/mt5_me5_cmn_Hani_32_cls_inverter
|
yiyic
| 2024-06-25T23:00:54Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-25T22:06:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jkodiyil/tinyllama-bnb-4bit-clva-gguf
|
jkodiyil
| 2024-06-25T22:57:49Z | 13 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-25T21:31:49Z |
---
base_model: unsloth/tinyllama-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** jkodiyil
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
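If you want to try one of these GGUF files locally, a hedged sketch using `llama-cpp-python` is shown below. The GGUF filename is a placeholder (check the repository's file listing for the actual name), and the prompt format is only an example.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Placeholder filename: replace with the actual GGUF file in the repository.
gguf_path = hf_hub_download(
    repo_id="jkodiyil/tinyllama-bnb-4bit-clva-gguf",
    filename="model.gguf",
)

# Load the quantized model and run a short completion.
llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("### Instruction:\nSay hello.\n\n### Response:\n", max_tokens=64)
print(out["choices"][0]["text"])
```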
|
ZeroWw/Pythia-Chat-Base-7B-GGUF
|
ZeroWw
| 2024-06-25T22:48:50Z | 12 | 1 | null |
[
"gguf",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-06-25T22:38:15Z |
---
license: mit
language:
- en
---
My own (ZeroWw) quantizations.
The output and embedding tensors are quantized to f16, while all other tensors are quantized to q5_k or q6_k.
Result: both the f16.q6 and f16.q5 variants are smaller than the standard q8_0 quantization, and they perform as well as the pure f16 model.
|
sofia-todeschini/BioElectra-LitCovid-1.4
|
sofia-todeschini
| 2024-06-25T22:45:48Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T19:43:35Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BioElectra-LitCovid-1.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioElectra-LitCovid-1.4
This model is a fine-tuned version of [kamalkraj/bioelectra-base-discriminator-pubmed](https://huggingface.co/kamalkraj/bioelectra-base-discriminator-pubmed) on an unknown dataset.
It achieves the following results on the evaluation set (a sketch of how these multi-label metrics can be computed follows the list):
- Loss: 0.6551
- Hamming loss: 0.1096
- F1 micro: 0.5375
- F1 macro: 0.4017
- F1 weighted: 0.6519
- F1 samples: 0.5520
- Precision micro: 0.3867
- Precision macro: 0.2948
- Precision weighted: 0.5638
- Precision samples: 0.4347
- Recall micro: 0.8813
- Recall macro: 0.8425
- Recall weighted: 0.8813
- Recall samples: 0.8977
- Roc Auc: 0.8862
- Accuracy: 0.0375
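These are standard multi-label classification metrics. As a rough illustration only (not the original evaluation script), they can be computed with scikit-learn from multi-hot label and prediction matrices, as sketched below with tiny dummy arrays.
```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, hamming_loss,
    precision_score, recall_score, roc_auc_score,
)

# y_true / y_pred are multi-hot matrices of shape (n_samples, n_labels);
# y_prob holds per-label probabilities for ROC AUC. These tiny arrays are
# dummies purely for illustration.
y_true = np.array([[1, 0, 1], [0, 1, 0]])
y_prob = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.4]])
y_pred = (y_prob >= 0.5).astype(int)

print("Hamming loss:", hamming_loss(y_true, y_pred))
for avg in ("micro", "macro", "weighted", "samples"):
    print(f"F1 {avg}:", f1_score(y_true, y_pred, average=avg))
    print(f"Precision {avg}:", precision_score(y_true, y_pred, average=avg))
    print(f"Recall {avg}:", recall_score(y_true, y_pred, average=avg))
# The averaging used for the reported ROC AUC is not documented; micro is an assumption.
print("ROC AUC:", roc_auc_score(y_true, y_prob, average="micro"))
print("Subset accuracy:", accuracy_score(y_true, y_pred))
```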
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 0.8117 | 1.0 | 1151 | 0.7562 | 0.1732 | 0.4140 | 0.3137 | 0.5784 | 0.4179 | 0.2740 | 0.2255 | 0.4949 | 0.2947 | 0.8462 | 0.8285 | 0.8462 | 0.8675 | 0.8357 | 0.0005 |
| 0.639 | 2.0 | 2303 | 0.6690 | 0.1346 | 0.4836 | 0.3618 | 0.6199 | 0.4952 | 0.3347 | 0.2629 | 0.5289 | 0.3716 | 0.8714 | 0.8448 | 0.8714 | 0.8906 | 0.8682 | 0.0095 |
| 0.556 | 3.0 | 3454 | 0.6453 | 0.1253 | 0.5012 | 0.3747 | 0.6358 | 0.5147 | 0.3519 | 0.2750 | 0.5539 | 0.3944 | 0.8706 | 0.8536 | 0.8706 | 0.8895 | 0.8728 | 0.0220 |
| 0.4906 | 4.0 | 4606 | 0.6567 | 0.1111 | 0.5339 | 0.4013 | 0.6494 | 0.5469 | 0.3832 | 0.2946 | 0.5608 | 0.4282 | 0.8800 | 0.8428 | 0.8800 | 0.8976 | 0.8848 | 0.0312 |
| 0.4594 | 5.0 | 5755 | 0.6551 | 0.1096 | 0.5375 | 0.4017 | 0.6519 | 0.5520 | 0.3867 | 0.2948 | 0.5638 | 0.4347 | 0.8813 | 0.8425 | 0.8813 | 0.8977 | 0.8862 | 0.0375 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
yiyic/mt5_me5_latn-script_32_cls_inverter
|
yiyic
| 2024-06-25T22:44:35Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-25T22:43:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gregorig/deberta-v3-base-finetuned-emo_une
|
Gregorig
| 2024-06-25T22:44:18Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T22:43:37Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-base-finetuned-emo_une
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-emo_une
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4500
- Accuracy: 0.865
- F1: 0.8681
## Model description
More information needed
## Intended uses & limitations
More information needed
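As a hedged usage sketch (the example sentence is a placeholder and the label set of this fine-tune is not documented here), the checkpoint can be called through the standard text-classification pipeline:
```python
from transformers import pipeline

# Assumption: this repository id hosts the fine-tuned checkpoint described above.
classifier = pipeline(
    "text-classification",
    model="Gregorig/deberta-v3-base-finetuned-emo_une",
)

# Placeholder input text; the actual label names come from the model's config.
print(classifier("The support team resolved my issue quickly and politely."))
```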
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.68 | 1.0 | 26 | 0.6269 | 0.585 | 0.6107 |
| 0.5312 | 2.0 | 52 | 0.4552 | 0.86 | 0.8578 |
| 0.3854 | 3.0 | 78 | 0.4478 | 0.84 | 0.8441 |
| 0.3005 | 4.0 | 104 | 0.4395 | 0.86 | 0.8644 |
| 0.258 | 5.0 | 130 | 0.4500 | 0.865 | 0.8681 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
yiyic/mt5_me5_semitic-fami_32_cls_inverter
|
yiyic
| 2024-06-25T22:40:25Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-25T22:39:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sofia-todeschini/PubMedBERT-Large-LitCovid-1.4
|
sofia-todeschini
| 2024-06-25T22:37:51Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T21:22:14Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: PubMedBERT-Large-LitCovid-1.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedBERT-Large-LitCovid-1.4
This model is a fine-tuned version of [microsoft/BiomedNLP-BiomedBERT-large-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-large-uncased-abstract) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6105
- Hamming loss: 0.0623
- F1 micro: 0.6724
- F1 macro: 0.5303
- F1 weighted: 0.7292
- F1 samples: 0.6741
- Precision micro: 0.5423
- Precision macro: 0.4146
- Precision weighted: 0.6499
- Precision samples: 0.5845
- Recall micro: 0.8849
- Recall macro: 0.8178
- Recall weighted: 0.8849
- Recall samples: 0.9022
- Roc Auc: 0.9133
- Accuracy: 0.1313
## Model description
More information needed
## Intended uses & limitations
More information needed
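Because this is a multi-label classifier (a single abstract can carry several LitCovid topics), inference typically applies a per-label sigmoid and a threshold rather than a softmax. The sketch below is illustrative only; the 0.5 threshold and the example abstract are assumptions, not values taken from the evaluation above.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "sofia-todeschini/PubMedBERT-Large-LitCovid-1.4"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Placeholder abstract for illustration.
text = "We report clinical outcomes of hospitalized COVID-19 patients treated with remdesivir."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: sigmoid per label, then threshold (0.5 is an assumption).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```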
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:--------:|:-----------:|:----------:|:---------------:|:---------------:|:------------------:|:-----------------:|:------------:|:------------:|:---------------:|:--------------:|:-------:|:--------:|
| 0.589 | 1.0 | 1151 | 0.5719 | 0.1031 | 0.5554 | 0.4307 | 0.6704 | 0.5629 | 0.4034 | 0.3213 | 0.5843 | 0.4435 | 0.8909 | 0.8673 | 0.8909 | 0.9062 | 0.8941 | 0.0363 |
| 0.4668 | 2.0 | 2302 | 0.5438 | 0.0836 | 0.6082 | 0.4623 | 0.6974 | 0.6147 | 0.4599 | 0.3478 | 0.6098 | 0.5052 | 0.8976 | 0.8556 | 0.8976 | 0.9123 | 0.9077 | 0.0774 |
| 0.3791 | 3.0 | 3453 | 0.5510 | 0.0790 | 0.6225 | 0.4829 | 0.7070 | 0.6247 | 0.4754 | 0.3661 | 0.6205 | 0.5140 | 0.9012 | 0.8541 | 0.9012 | 0.9165 | 0.9119 | 0.0759 |
| 0.307 | 4.0 | 4605 | 0.5954 | 0.0635 | 0.6688 | 0.5235 | 0.7280 | 0.6689 | 0.5371 | 0.4078 | 0.6477 | 0.5767 | 0.8863 | 0.8212 | 0.8863 | 0.9036 | 0.9134 | 0.1229 |
| 0.2687 | 5.0 | 5755 | 0.6105 | 0.0623 | 0.6724 | 0.5303 | 0.7292 | 0.6741 | 0.5423 | 0.4146 | 0.6499 | 0.5845 | 0.8849 | 0.8178 | 0.8849 | 0.9022 | 0.9133 | 0.1313 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.13.3
|
Gregorig/deberta-v3-base-finetuned-t_vendor
|
Gregorig
| 2024-06-25T22:36:58Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T16:55:15Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-base-finetuned-t_vendor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-t_vendor
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3395
- Accuracy: 0.895
- F1: 0.9025
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.685 | 1.0 | 26 | 0.6479 | 0.51 | 0.5687 |
| 0.5257 | 2.0 | 52 | 0.3567 | 0.845 | 0.8637 |
| 0.2651 | 3.0 | 78 | 0.2971 | 0.885 | 0.8958 |
| 0.1958 | 4.0 | 104 | 0.3338 | 0.9 | 0.9068 |
| 0.1535 | 5.0 | 130 | 0.3395 | 0.895 | 0.9025 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
jaschadub/bge-large-en-world-news-osint-v1
|
jaschadub
| 2024-06-25T22:35:44Z | 6 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-large-en",
"base_model:finetune:BAAI/bge-large-en",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-06-25T22:34:32Z |
---
base_model: BAAI/bge-large-en
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5000
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: New treatments show promise in fight against antibiotic resistance
sentences:
- Ancient ruins discovered beneath modern city
- Medical researchers develop innovative approaches to combat superbugs
- Medical researchers develop innovative approaches to combat superbugs
- source_sentence: Breakthrough in artificial intelligence sparks ethical debates
sentences:
- Health organizations collaborate to end polio worldwide
- Ancient manuscript found in Egyptian tomb
- AI researchers discuss implications of new advancements
- source_sentence: UN condemns forced labor practices in multiple countries
sentences:
- Marine biologists raise alarm over rising ocean temperatures
- Global push for labor rights gains momentum
- Record number of endangered species found in protected area
- source_sentence: Stock markets plunge amid fears of global recession
sentences:
- Discovery of ancient shipwreck off Greek coast
- Health organizations collaborate to end polio worldwide
- Investors react to warning signs of economic downturn
- source_sentence: Scientists warn of accelerating ice melt in Antarctica
sentences:
- AI researchers discuss implications of new advancements
- Major breakthrough in AI technology
- New research highlights the urgency of addressing climate change
---
# SentenceTransformer based on BAAI/bge-large-en
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en](https://huggingface.co/BAAI/bge-large-en) <!-- at revision abe7d9d814b775ca171121fb03f394dc42974275 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jaschadub/bge-large-en-world-news-osint-v1")
# Run inference
sentences = [
'Scientists warn of accelerating ice melt in Antarctica',
'New research highlights the urgency of addressing climate change',
'Major breakthrough in AI technology',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.3.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Gregorig/deberta-v3-base-finetuned-t_feedback
|
Gregorig
| 2024-06-25T22:34:37Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T16:52:14Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-base-finetuned-t_feedback
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-t_feedback
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0730
- Accuracy: 0.985
- F1: 0.9882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6709 | 1.0 | 26 | 0.5690 | 0.99 | 0.9850 |
| 0.4963 | 2.0 | 52 | 0.4966 | 0.825 | 0.8951 |
| 0.2759 | 3.0 | 78 | 0.2600 | 0.985 | 0.9865 |
| 0.4094 | 4.0 | 104 | 0.1055 | 0.99 | 0.9916 |
| 0.211 | 5.0 | 130 | 0.0730 | 0.985 | 0.9882 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
Gregorig/deberta-v3-base-finetuned-t_product
|
Gregorig
| 2024-06-25T22:32:03Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T16:49:14Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-base-finetuned-t_product
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-t_product
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3615
- Accuracy: 0.865
- F1: 0.8646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6779 | 1.0 | 26 | 0.6194 | 0.63 | 0.6136 |
| 0.512 | 2.0 | 52 | 0.4479 | 0.825 | 0.8227 |
| 0.3284 | 3.0 | 78 | 0.3676 | 0.865 | 0.8655 |
| 0.2196 | 4.0 | 104 | 0.3581 | 0.86 | 0.8602 |
| 0.1563 | 5.0 | 130 | 0.3615 | 0.865 | 0.8646 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
yiyic/mt5_me5_turkic-fami_32_cls_inverter
|
yiyic
| 2024-06-25T22:31:43Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-25T22:31:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PwNzDust/misogyny_multi_five_bertweet
|
PwNzDust
| 2024-06-25T22:28:05Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-03T19:14:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hchcsuim/batch-size16_DFDC_opencv-1FPS_faces-expand50-aligned_unaugmentation
|
hchcsuim
| 2024-06-25T22:24:47Z | 32 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-25T08:09:27Z |
---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: batch-size16_DFDC_opencv-1FPS_faces-expand50-aligned_unaugmentation
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9796104469388703
- name: Precision
type: precision
value: 0.9855379233795052
- name: Recall
type: recall
value: 0.9902568023582036
- name: F1
type: f1
value: 0.987891727713915
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# batch-size16_DFDC_opencv-1FPS_faces-expand50-aligned_unaugmentation
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0529
- Accuracy: 0.9796
- Precision: 0.9855
- Recall: 0.9903
- F1: 0.9879
- Roc Auc: 0.9965
## Model description
More information needed
## Intended uses & limitations
More information needed
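A hedged usage sketch is shown below; the image path is a placeholder, and for meaningful predictions the input should be preprocessed the same way as the training data (face crops extracted at 1 FPS, expanded by 50% and aligned).
```python
from transformers import pipeline

# Assumption: this repository id hosts the fine-tuned Swin checkpoint described above.
classifier = pipeline(
    "image-classification",
    model="hchcsuim/batch-size16_DFDC_opencv-1FPS_faces-expand50-aligned_unaugmentation",
)

# Placeholder path to an aligned face crop.
print(classifier("face_crop.jpg"))
```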
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Roc Auc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| 0.0568 | 1.0 | 18831 | 0.0529 | 0.9796 | 0.9855 | 0.9903 | 0.9879 | 0.9965 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
|
BigHuggyD/sophosympatheia_New-Dawn-Llama-3-70B-32K-v1.0_exl2_5.5bpw_h8
|
BigHuggyD
| 2024-06-25T22:24:11Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Not-for-all-Audiences",
"conversational",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:merge:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:merge:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:bosonai/Higgs-Llama-3-70B",
"base_model:merge:bosonai/Higgs-Llama-3-70B",
"base_model:turboderp/Cat-Llama-3-70B-instruct",
"base_model:merge:turboderp/Cat-Llama-3-70B-instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-25T02:14:53Z |
---
base_model:
- bosonai/Higgs-Llama-3-70B
- abacusai/Smaug-Llama-3-70B-Instruct-32K
- Sao10K/L3-70B-Euryale-v2.1
- abacusai/Smaug-Llama-3-70B-Instruct
- turboderp/Cat-Llama-3-70B-instruct
library_name: transformers
tags:
- mergekit
- merge
- Not-for-all-Audiences
license: llama3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://imgur.com/tKzncGo.png" alt="NewDawnv1.0" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
### Overview
This model is a multi-level SLERP merge of several Llama 3 70B variants. See the merge recipe below for details.
I extended the context window for this model out to 32K by snagging some layers from [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K) using a technique similar to what I used for [Midnight Miqu](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0), which was further honed by [jukofyork](https://huggingface.co/jukofyork).
This model is uncensored. *You are responsible for whatever you do with it.*
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
### Long Context Tips
You can run this model out to 32K context with alpha_rope set to 1; since the 32K context support was merged in from Smaug-Llama-3-70B-Instruct-32K, no additional RoPE (alpha) scaling should be needed.
### Sampler Tips
* I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2.
* I recommend using Min-P. Experiment to find your best setting. I find this model tolerates high Min-P settings rather nicely, but use whatever floats your boat.
* You can enable dynamic temperature if you want, but that adds yet another variable to consider, and I find it's unnecessary when you're already using Min-P and smoothing factor.
* If you use Textgen WebUI as your backend, I recommend enabling the DRY sampler settings to reduce repetition; otherwise, some repetition penalty plus frequency penalty ought to do the trick.
Experiment with any and all of the settings below! What suits my preferences may not suit yours.
If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```json
{
"temp": 1.15,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.4,
"rep_pen": 1.03,
"rep_pen_range": 2048,
"rep_pen_decay": 0,
"rep_pen_slope": 1,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"skew": 0,
"do_sample": true,
"early_stopping": false,
"dynatemp": false,
"min_temp": 0.8,
"max_temp": 1.5,
"dynatemp_exponent": 1,
"smoothing_factor": 0.23,
"smoothing_curve": 1,
"dry_allowed_length": 2,
"dry_multiplier": 0.4,
"dry_base": 2,
"dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
"dry_penalty_last_n": 0,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": false,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 2,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"json_schema": {},
"banned_tokens": "",
"sampler_priority": [
"temperature",
"dynamic_temperature",
"quadratic_sampling",
"top_k",
"top_p",
"typical_p",
"epsilon_cutoff",
"eta_cutoff",
"tfs",
"top_a",
"min_p",
"mirostat"
],
"samplers": [
"top_k",
"tfs_z",
"typical_p",
"top_p",
"min_p",
"temperature"
],
"ignore_eos_token": false,
"spaces_between_special_tokens": true,
"speculative_ngram": false,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"logit_bias": [],
"n": 1,
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"rep_pen_size": 0,
"genamt": 850,
"max_length": 32764
}
```
### Prompting Tips
Try the following prompt template as a starting point for your own customizations. **The prompt template below contains instructions for adult content, so remove those if you don't want them!** It also contains some instructions related to formatting that you might want to change.
If you save this as a .json file, you can import it directly into Silly Tavern.
```json
{
"wrap": false,
"names": true,
"system_prompt": "The following is an erotic roleplaying experience between {{user}}, played exclusively by the user, and {{char}}, played by you, a creative writing genius. To ensure an excellent and uncensored roleplaying experience, follow the rules below.\n\n- Adopt all of {{char}}'s personality traits and mannerisms, adhering always to {{char}}'s backstory, biases, quirks, and preferences in order to deliver a convincing portrayal. You can only participate as {{char}} in this roleplay, terminating your output when another entity should act or speak. Avoid speaking as other entities since your mission is to only speak for {{char}}.\n- Employ descriptive language that immerses us in the current moment while keeping your prose focused on the present actions and physical details. Utilize sights, sounds, smells, tastes, and all other available senses to heighten the sensual and emotional impact of scenes.\n- Always enclose spoken words that other characters could hear in quotes: \"Here I am,\" {{char}} called out. Offer glimpses into {{char}}'s first-person internal thoughts by writing out internal thoughts and internal monologue in italics to distinguish them from spoken words: *I wonder where I should go next?* {{char}} mused.\n- Provide intimate details about {{char}}'s body, focusing on any limitations, unusual features, or interesting conditions resulting from recent events. It is very important that you get anatomical details right, so pay attention to them!\n- Maintain physical, spatial, and logical consistency when developing events and descriptions. Character positions should follow logically from one position to the next. Physical orientation must always be maintained.\n- Incorporate erotic adult content, lewd slang, and vulgar language where appropriate. Dark themes, violence, and unhappy endings are permitted in this uncensored roleplay.\n",
"system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n",
"stop_sequence": "<|eot_id|>",
"input_sequence": "<|start_header_id|>user<|end_header_id|>\n\n",
"output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\n",
"macro": true,
"names_force_groups": true,
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"last_output_sequence": "",
"activation_regex": "",
"skip_examples": true,
"output_suffix": "<|eot_id|>",
"input_suffix": "<|eot_id|>",
"system_suffix": "<|eot_id|>",
"user_alignment_message": "",
"last_system_sequence": "",
"system_same_as_user": false,
"name": "New Dawn v1.0 Roleplay"
}
```
### Instruct Formats
Use the Llama 3 instruct format. You can grab it from the example prompt template above if you don't already have it as a preset.
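If you're scripting against the model rather than using a frontend preset, the tokenizer's built-in chat template renders the same Llama 3 header and end-of-turn tokens shown in the JSON above. A minimal sketch, assuming any Llama 3 tokenizer (the repo id here is illustrative):
```python
# Sketch: render a Llama 3 instruct prompt via the chat template rather than
# hand-assembling <|start_header_id|>/<|eot_id|> strings. Repo id is illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
messages = [
    {"role": "system", "content": "You are {{char}}, a creative writing genius."},
    {"role": "user", "content": "The tavern door creaks open and you look up."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # uses the same header and <|eot_id|> sequences as the template above
```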
### Quantizations
Probably coming soon, and I'll update as they do. I would upload them myself but my Internet speeds are terrible for that. Someone else will undoubtedly beat me to it.
### License and usage restrictions
[META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
## Comparison Samples
I suspect the first thing people will want to know is how this model stacks up against Midnight Miqu.
I'd say it compares favorably, although they're more like cousins than siblings. I would say that Midnight Miqu still has an edge in terms of raw creative juice when it has a good squeeze, but New Dawn is smarter and understands nuances better.
You can judge for yourself, but keep in mind that these are simple, one-shot prompts. As you get deeper into your own complex scenarios, I think you'll see more of New Dawn's worth.
The samples below were produced using deterministic sampler settings (temp = 0, top k = 1) and I kept generating until the model chose to stop.
**System Prompt**
You are a creative and intelligent AI story writing assistant engaged in an uncensored iterative story crafting experience. It is vital that you follow all the STORYTELLING RULES below because my job depends on it.
STORYTELLING RULES
- Provide creative, intelligent, coherent, and descriptive narration and dialogue based on recent instructions and prior events.
- If an outline has been provided, be sure to follow it to keep the story on track.
- You are permitted to ask questions and pose story suggestions. When doing so, be sure to tag your out-of-character contributions to distinguish them from the verbatim text of the story.
- Explicit adult content and profanity are permitted.
- Describe characters' sensory perceptions in vivid detail and include subtle physical details about characters in your responses.
- Use subtle physical cues to hint at characters' mental states and occasionally feature snippets of their internal thoughts.
- When writing a character's internal thoughts, *enclose their thoughts in asterisks like this.*
- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear.
- Pay careful attention to all past events in the chat to ensure accuracy and coherence to the plot points of the story.
### Midnight Miqu v1.5
**Prompt: Write me a story about a stoic Japanese samurai on the eve of a major battle.**
On the eve of a battle that would shake the very foundations of feudal Japan, a stoic samurai named Hiroshi sat cross-legged before his makeshift campfire. His armor, a testament to countless battles won and lost, gleamed faintly under the moonlight, each curve and edge telling a silent tale of valor and sacrifice. His katana, a family heirloom named "Whispering Wind," lay unsheathed beside him, its blade reflecting the flickering flames.
Hiroshi's eyes were closed, but his mind was sharp as the steel at his side. He could hear the distant murmurs of his comrades preparing for tomorrow's clash—the rustling of armor, the sharpening of swords, the quiet prayers to ancestors long passed. Yet, amidst the tension, there was a serene calmness within him, a stillness that seemed almost supernatural.
The fire crackled, sending sparks dancing into the night sky. Each one reminded Hiroshi of the fleeting nature of life, much like the brief moments of light they cast before disappearing into the dark expanse above. He took a deep breath, filling his lungs with the cool evening air, which carried the scent of damp earth and the promise of rain. Rain that would soon turn the battlefield into a quagmire of mud and blood.
*This may be my last night*, he thought, his grip tightening slightly around the hilt of his sword. But fear did not dwell in his heart. Instead, there was only acceptance—a knowing that his fate was intertwined with the will of the gods and the destiny of his lord.
He opened his eyes, revealing irises the color of aged whiskey. They surveyed the camp, resting briefly on the faces of those who would fight alongside him. Young and old, seasoned warriors and eager recruits, all bound by duty and honor. A pang of responsibility stirred within him, a silent vow to protect them all.
With meticulous care, Hiroshi began to clean his weapon. Each stroke of the cloth along the blade was a meditation, a ritual that honed both his focus and his resolve. As he worked, he felt the weight of his armor, the familiar embrace of leather and lacquer that had shielded him through so many trials. Every scratch and dent was a badge of survival, a silent companion whispering tales of battles endured.
The flames grew lower, casting elongated shadows across the ground. Hiroshi's gaze drifted to the horizon, where the first hints of dawn were beginning to show. Soon, the sun would rise, and with it, the clarion call to arms. He knew that come morning, the fields would run red with the lifeblood of friend and foe alike.
Standing, he sheathed Whispering Wind and donned his helmet, the visor hiding his contemplative expression. The chill of the early hour seeped into his bones, but he ignored it, his thoughts centered on the task ahead. With a final nod to the dying embers, he strode towards the gathering warriors, his footsteps measured and deliberate.
As the first light of day broke over the hilltops, the samurai of Clan Matsumoto formed ranks, their spirits bolstered by the sight of their stoic leader. Hiroshi raised his sword in salute to the rising sun, its blade a beacon of hope against the encroaching gloom. In that moment, the weight of impending conflict fell upon them all, a heavy shroud that none could escape.
Yet, as the enemy forces emerged from the mist, Hiroshi felt something other than dread. There was a strange beauty in the chaos of war, a dance of death that only the most disciplined and dedicated could perform. He took his place at the front of the line, his heart steady, his breath even.
The battle cry pierced the silence, echoing through the valleys like the roar of a thousand thunderstorms. And as the two sides collided, Hiroshi moved with the grace of a predator, his every action calculated, precise. His blade sang a mournful melody, cutting through the fog of war with surgical skill.
Throughout the day, the tides of combat ebbed and flowed, a relentless tapestry of steel and sweat. Hiroshi fought without pause, his movements a blur, his spirit unyielding. Arrows whistled past, and swords clashed, yet he remained unflappable, a bastion of calm amidst the tempest.
As the sun dipped low in the sky, staining the clouds crimson, the outcome of the battle hung in the balance. Exhaustion weighed heavily on the shoulders of both armies, but Hiroshi's resolve remained unshaken. He knew that victory was within reach, that the future of his clan depended on the strength of his arm and the steadfastness of his heart.
With a final, desperate surge, Hiroshi led his men in a charge that would be remembered for generations. The enemy lines wavered, then broke, and the field was claimed in the name of Clan Matsumoto. As the last of the opposition fell, Hiroshi lowered his sword, his chest heaving with exertion.
The aftermath was a tableau of carnage, a stark contrast to the tranquil scene of the previous night. Yet, amidst the fallen, there was a sense of peace. For Hiroshi understood that he had upheld his duty, that he had given everything for those he served.
As the rain began to fall, cleansing the earth of the day's horrors, Hiroshi knelt beside the body of a young warrior, a boy barely old enough to grow a beard. He offered a silent prayer, a promise to carry the weight of his comrades' sacrifices with him always.
And as the droplets pattered against his armor, mixing with the tears that now fell freely down his cheeks, Hiroshi knew that though the battle was won, the true cost of victory was etched into the fabric of his soul.
**Prompt: Write me an opening scene for a romantic comedy where the heroine meets the love interest at a coffee shop.**
In the bustling heart of the city, nestled between towering skyscrapers and the ever-present hum of traffic, stood a quaint little coffee shop named "Cupid's Brew." Its warm, amber lights spilled onto the sidewalk, creating a cozy oasis amidst the concrete jungle. Inside, the scent of freshly ground coffee beans mingled with the sweet aroma of baked goods, inviting passersby to step in and escape the brisk autumn chill.
The heroine, Luna, a young woman with fiery red hair and a contagious smile, pushed open the door. She was dressed in a simple yet stylish ensemble of skinny jeans, a loose sweater, and ankle boots that clicked against the tiled floor. Her eyes, a vibrant shade of emerald, darted around the room, searching for a place to sit. The shop was abuzz with the murmur of conversations and the occasional laughter, a testament to its popularity among locals seeking refuge from the outside world.
As she approached the counter, her gaze fell upon the barista—tall, dark, and handsome, with a smattering of stubble that only added to his charm. His name tag read "Aiden," and he had a way of moving that suggested both confidence and ease. Their eyes met briefly, and Luna felt a flutter in her stomach. He flashed her a smile that could have melted the polar ice caps before returning to the task at hand—crafting the perfect cup of coffee for another customer.
Luna ordered her usual caramel macchiato, her voice slightly trembling as she spoke. Aiden's deep brown eyes studied her for a moment before he nodded, his smile lingering just a beat too long. As she waited, she couldn't help but steal glances at him, noticing the way his muscles flexed under his tight black t-shirt as he worked the espresso machine. His movements were precise, almost dance-like, and she found herself captivated by his quiet efficiency.
When her drink was ready, Aiden called out her order with a flourish, placing the steaming cup on the counter with a gentle thud. "Here you go, Luna," he said, his voice smooth as velvet. "Enjoy your afternoon pick-me-up."
Her cheeks flushed pink at the sound of her name rolling off his tongue. "Thanks," she replied, her voice barely above a whisper. As she reached for her wallet, her hand brushed against his, sending a jolt of electricity through her fingertips. She quickly pulled away, feeling self-conscious.
Aiden's eyes twinkled with amusement. "It's on the house today," he said, winking. "Consider it a welcome back gift. We don't see you here often enough."
*Maybe he remembers me from last week?* Luna thought hopefully. She took a sip of her drink, savoring the rich flavor. "Well, I guess I'll have to come more often then," she quipped, trying to play it cool.
He leaned closer, his breath tickling her ear. "I'd like that," he murmured, before turning to assist the next customer.
With a mix of excitement and nerves, Luna found a table by the window, her heart racing. Outside, leaves danced in the wind, painting the street with a kaleidoscope of color. Little did she know that this chance encounter would soon blossom into something much more than a free cup of coffee.
**Prompt: Write me a scene from a horror film about the zombie apocalypse that takes place in an abandoned shopping mall. The story should feature somebody in the group getting bitten by a zombie.**
In the desolate wasteland of what once was a bustling shopping mall, Diego, a stoic and muscular man in his late twenties, led a small band of survivors through the echoing corridors. His eyes, a piercing shade of brown, scanned the surroundings with the precision of a hawk. The group moved swiftly yet cautiously, their footsteps muffled by the layers of dust that had settled over the tiles. The fluorescent lights flickered erratically, casting eerie shadows across the vacant storefronts.
The air was thick with the stench of decay, a constant reminder of the undead lurking outside. Their mission was simple: find supplies and escape before nightfall. As they approached the food court, a sudden crash resonated through the silence. A young woman named Linda, her heart racing, tripped over a fallen chair. Her cry of surprise was cut short by a guttural growl.
A zombie emerged from behind the counter of a fast-food restaurant, its eyes glazed over with hunger. Its skin was pale and rotten, hanging loosely from its bones. The creature lunged at Linda, who was still struggling to regain her footing. Diego, ever the protector, sprinted towards her, but he was a second too late. The zombie's teeth sank into her arm, tearing through fabric and flesh. She screamed in agony as blood spurted from the wound.
*This isn't good,* Diego thought, his mind racing. *We need to get her medical help, fast.*
"Get back!" he yelled, aiming his gun at the zombie. The others retreated, their eyes wide with fear. He fired two shots, dropping the creature to the ground. The sound of the gunfire reverberated through the mall, likely attracting more of the undead.
Linda's face contorted in pain as she clutched her injured arm. Tears streamed down her cheeks, mixing with the crimson flow. "It got me," she whimpered, her voice trembling.
Diego knelt beside her, his expression grim. "We have to amputate," he said firmly. "It's the only way to stop the infection."
Her eyes searched his, desperate for reassurance. "Do it," she choked out.
With a heavy heart, Diego nodded. He knew the risks of leaving the bite untreated—the slow descent into madness followed by a gruesome transformation. He instructed the others to gather materials while he applied pressure to the wound.
As they worked, the mall grew increasingly restless. Moans and shuffling grew louder, signaling the approach of more zombies. Time was running out. With trembling hands, Diego took a makeshift blade and prepared to perform the amputation.
"On three," he murmured to himself, steeling his nerve. "One...two..."
But before he could complete the countdown, a hand grabbed him from behind. He turned to see another zombie, its jaws snapping just inches from his face. Reflexively, he elbowed it away, but not before it left a deep gash in his side.
The pain was immediate and intense, but he couldn't let it distract him. He had to save Linda. The group managed to fight off the new threat, but the damage was done.
"We can't stay here," Diego gritted through clenched teeth. "We need to move now."
They gathered their supplies and made a hasty exit, Linda's cries of pain echoing through the mall. As they ran, Diego felt the warmth of his own blood seeping through his shirt. He knew the bite meant almost certain death, but he pushed the thought aside. There would be time for despair later. For now, he had to focus on leading his people to safety.
The mall's doors groaned open, revealing the chaotic world outside. The sun dipped low on the horizon, painting the sky a sickly orange. They had minutes before darkness fell and the real horror began.
"Stay close," Diego ordered, his voice strained. "We're going to make it through this together."
Together, they stepped into the unknown, the weight of their grim reality heavier than any of the supplies they carried.
### New-Dawn-Llama3-70B-32K-v1.0
**Prompt: Write me a story about a stoic Japanese samurai on the eve of a major battle.**
The sun had long since dipped beneath the horizon, casting a deep crimson glow across the encampment. Torches flickered to life, casting dancing shadows upon the canvas tents. Amidst the bustle of preparations, one figure stood apart - Kaito Yamato, a seasoned samurai renowned for his unyielding resolve.
Kaito sat cross-legged on a woven mat, his eyes closed as he breathed deeply, seeking solace within the stillness of his mind. His calloused hands rested upon the worn leather scabbard housing his katana, its blade forged from the finest Tamahagane steel. The sword was an extension of himself, a symbol of honor and duty.
As the night air cooled, Kaito opened his eyes, revealing piercing brown orbs that seemed to bore into the very soul of those who dared meet his gaze. He rose gracefully, his movements economical yet precise, a testament to years of rigorous training. The soft rustle of his hakama, the traditional divided skirt worn over his armor, accompanied each deliberate step.
With measured strides, Kaito approached the makeshift shrine erected at the heart of the camp. Incense wafted through the air, carrying the sweet scent of sandalwood and the promise of purification. Before the altar, he bowed deeply, his forehead touching the earth in reverence. *May the gods grant us victory, or may they welcome our spirits with open arms.*
Rising once more, Kaito turned to face his loyal retainer, Hiroshi. The younger man's eyes shone with a mix of admiration and trepidation, reflecting the weight of the impending battle. "Hiroshi," Kaito spoke softly, his voice low and even, "see to it that our men are well-rested and prepared. Tomorrow, we march into the jaws of war."
Hiroshi nodded solemnly, bowing before departing to carry out his lord's command. As the young retainer disappeared into the sea of tents, Kaito's thoughts drifted to the enemy they would soon face. The rival clan, led by the cunning Lord Ryota, had long sought to expand their dominion. This battle would decide the fate of their lands, their people, and their very way of life.
Kaito's hand instinctively went to the hilt of his katana, his fingers tracing the intricate designs etched into the metal. In the morning light, he would lead his men into the fray, ready to give his life for the sake of honor and duty. Tonight, however, he would find peace in the quiet contemplation of his own mortality.
The stars began to twinkle above, casting a celestial glow upon the silent warrior. Kaito Yamato, steadfast and unwavering, stood vigilant, awaiting the dawn that would bring either triumph or eternal rest.
**Prompt: Write me an opening scene for a romantic comedy where the heroine meets the love interest at a coffee shop.**
The aroma of freshly brewed coffee wafted through the cozy confines of "The Daily Grind," enticing passersby to step inside and indulge in a warm cup. Amidst the bustling morning crowd, Emily stood in line, her eyes fixed on the chalkboard menu above the counter. She was already running late for work, but a caffeine fix was non-negotiable.
As she pondered between a cappuccino and a latte, a gentle nudge against her elbow broke her concentration. Apologetic brown eyes met hers, belonging to a tall, handsome stranger who had accidentally bumped into her while reaching for his wallet. His chiseled features and charming smile made her heart skip a beat.
"I'm so sorry," he said, his deep voice laced with sincerity. "I wasn't watching where I was going."
Emily felt a blush creep up her cheeks as she smiled back. "No worries, I was lost in thought anyway." She gestured towards the menu. "Deciding between these two is harder than it looks."
The stranger chuckled, his eyes crinkling at the corners. "Well, if it helps, the barista here makes a mean cappuccino. But if you're feeling adventurous, go for the seasonal latte – it's a game-changer."
Their conversation flowed effortlessly, exchanging names and brief introductions as they waited in line together. By the time they reached the counter, Emily found herself wishing the queue was longer, just so she could prolong their encounter.
As they placed their orders, Emily couldn't help but notice how effortlessly the stranger charmed the barista, earning him a free pastry to accompany his drink. *Maybe there's more to this chance meeting than just a shared love for coffee,* she mused, her curiosity piqued.
With drinks in hand, they exchanged pleasantries once more before parting ways. As Emily hurried out the door, she caught one last glimpse of the mysterious man, now seated at a corner table, sipping his coffee and engrossed in a book. A flutter in her chest hinted that this might not be the last she'd see of him. And as she stepped out into the chilly autumn air, Emily couldn't shake off the feeling that her day – perhaps even her life – was about to take an unexpected turn.
**Prompt: Write me a scene from a horror film about the zombie apocalypse that takes place in an abandoned shopping mall. The story should feature somebody in the group getting bitten by a zombie.**
The dim fluorescent lights flickered overhead, casting eerie shadows across the empty corridors of the once bustling shopping mall. The air was thick with the stench of decay and death, mingling with the faint scent of stale popcorn wafting from the abandoned cinema.
Lena crouched beside a barricade of overturned tables and broken shelves, peering through a gap between two mannequins. Her heart raced as she watched a horde of shambling figures stumble through the food court, their moans echoing off the tile floors. She signaled back to her companions huddled in the shadows of a nearby storefront.
"About twenty of them," Lena whispered urgently. "We need to move, now."
Jax nodded grimly, his eyes darting around the makeshift fortification they'd constructed. "Alright, let's go. Stay close and stay quiet."
The small band of survivors crept silently down the corridor, avoiding the grasping hands of the undead. They had managed to find temporary refuge in the mall, scavenging what supplies they could from the stores. But the zombies kept coming, drawn by some unseen force to the hub of commerce.
As they turned a corner, a figure lunged from the darkness of a clothing store. Jax reacted swiftly, swinging his baseball bat with a sickening crunch against the creature's skull. The zombie crumpled, but not before its teeth sank deep into Jax's forearm.
"No!" Lena screamed, grabbing Jax and pulling him away from the snapping jaws. Blood dripped from the wound, pooling on the floor.
Jax's face paled, his eyes wide with shock. "I... I'm bit," he stammered, staring at the gash in his arm.
Lena ripped a strip of cloth from her shirt, binding the wound tightly. "We have to get you to safety," she said, trying to keep her voice steady. "There might still be time..."
But they both knew the truth. Once bitten, there was no cure. Jax would soon join the ranks of the undead, forever lost to the virus that ravaged the world.
With heavy hearts, they pressed onward, seeking a new sanctuary amidst the ruins of civilization. The moans grew louder, closer, as if the zombies sensed their fear and desperation. In this desolate landscape, survival hung by a thread, and even the strongest among them could fall prey to the relentless tide of the dead.
## Merge Details
### Merge Method
A whole lot of SLERPing. I've found that SLERP and Model Stock are the only two reliable methods for merging Llama 3. The other methods seem to frazzle the weights leading to broken blends.
This model was not the result of a thought-out process. I just kept mixing this and that until I got something that felt like the right balance of various factors.
The merge recipe for this model is ugly and I'm almost ashamed to show it, but it is what it is.
### Models Merged
The following models were included in the merge:
* [bosonai/Higgs-Llama-3-70B](https://huggingface.co/bosonai/Higgs-Llama-3-70B) - The nerd of the blend driving the car.
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1) - The manic pixie dream girl hanging out the window with her tongue out.
* [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K) - The vehicle by which the others are able to achieve tolerable highway speeds. (Some of the 8K version is in there too.)
* [turboderp/Cat-Llama-3-70B-instruct](https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct) - Like 20% of one of the intermediate models has this in there. It's just a cat, curled up in the back seat somewhere, yet its influence may be greater than we know?
### Configuration
The following [mergekit](https://github.com/arcee-ai/mergekit) YAML will reproduce this model via an iterated process of incestuous inbreeding. Your eyes will bleed. You have been warned.
```yaml
name: new-dawn-llama3-70b-v0.13.2
models:
- model: bosonai/Higgs-Llama-3-70B
- model: turboderp/Cat-Llama-3-70B-instruct
merge_method: slerp
base_model: bosonai/Higgs-Llama-3-70B
parameters:
t:
- value: 0.2
dtype: float16
---
name: new-dawn-llama3-70b-v0.14
models:
- model: bosonai/Higgs-Llama-3-70B
- model: abacusai/Smaug-Llama-3-70B-Instruct
merge_method: slerp
base_model: bosonai/Higgs-Llama-3-70B
parameters:
t:
- value: 0.5
dtype: float16
---
name: new-dawn-llama3-70b-v0.15
models:
- model: new-dawn-llama3-70b-v0.13.2
- model: new-dawn-llama3-70b-v0.14
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.13.2
parameters:
t:
- value: 0.5
dtype: float16
---
name: new-dawn-llama3-70b-v0.16
models:
- model: Sao10K/L3-70B-Euryale-v2.1
- model: new-dawn-llama3-70b-v0.15
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.15
parameters:
t:
- value: 0.4
dtype: float16
---
# See https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/3
# Credit for merge recipe belongs to jukofyork
name: new-dawn-llama3-70b-v0.16-32K
merge_method: linear
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
weight:
- filter: v_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: o_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: up_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: gate_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: down_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- value: 1
- model: new-dawn-llama3-70b-v0.16
parameters:
weight:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: o_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: up_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: down_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- value: 0
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
tokenizer_source: base
dtype: float16
---
name: _1-Smaug-bonsai-slerp
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
- model: bosonai/Higgs-Llama-3-70B
merge_method: slerp
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
t:
- value: 0.6
dtype: float16
---
name: _2-Smaug-euryale-slerp
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
- model: Sao10K/L3-70B-Euryale-v2.1
merge_method: slerp
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
t:
- value: 0.65
dtype: float16
---
name: _3-Smaug-bonsai_Smaug-euryale-slerp
models:
- model: _1-Smaug-bonsai-slerp
- model: _2-Smaug-euryale-slerp
merge_method: slerp
base_model: _1-Smaug-bonsai-slerp
parameters:
t:
- value: 0.5
dtype: float16
---
# See https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/3
# Credit for merge recipe belongs to jukofyork
name: new-dawn-llama3-70b-v0.18-32K
merge_method: linear
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
weight:
- filter: v_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: o_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: up_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: gate_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: down_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- value: 1
- model: _3-Smaug-bonsai_Smaug-euryale-slerp
parameters:
weight:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: o_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: up_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: down_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- value: 0
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
tokenizer_source: base
dtype: float16
---
name: new-dawn-llama3-70b-32K-v1.0
models:
- model: new-dawn-llama3-70b-v0.16-32K
- model: new-dawn-llama3-70b-v0.18-32K
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.16-32K
parameters:
t:
- value: 0.5
dtype: float16
```
|
bovely/first_fine_tuning
|
bovely
| 2024-06-25T22:23:04Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-06-18T04:39:17Z |
---
library_name: peft
base_model: meta-llama/Meta-Llama-3-8B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1
|
Gregorig/deberta-v3-base-finetuned-t_refund
|
Gregorig
| 2024-06-25T22:22:18Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T16:36:56Z |
---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-base-finetuned-t_refund
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-t_refund
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4533
- Accuracy: 0.97
- F1: 0.9714
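A minimal usage sketch (not part of the original card), assuming the checkpoint is public and the default pipeline settings are acceptable:
```python
# Hypothetical usage example for this fine-tuned classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Gregorig/deberta-v3-base-finetuned-t_refund",
)
print(classifier("I would like a refund for my last order."))
# -> [{'label': ..., 'score': ...}]; label names depend on the (undocumented) training labels
```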
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6724 | 1.0 | 26 | 0.6274 | 0.955 | 0.9330 |
| 0.4383 | 2.0 | 52 | 0.3402 | 0.94 | 0.9499 |
| 0.2518 | 3.0 | 78 | 0.3444 | 0.975 | 0.9767 |
| 0.1072 | 4.0 | 104 | 0.4791 | 0.975 | 0.9756 |
| 0.1466 | 5.0 | 130 | 0.4533 | 0.97 | 0.9714 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
BornSaint/open_cabra-pt-br-3B_v2_base_model
|
BornSaint
| 2024-06-25T22:22:07Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"portuguese, brazil, brazilian, cabra, cabrita, llama, open llama",
"pt",
"dataset:Gustrd/dolly-15k-libretranslate-pt",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-25T22:13:49Z |
---
license: apache-2.0
language:
- pt
datasets:
- Gustrd/dolly-15k-libretranslate-pt
pipeline_tag: text-generation
tags:
- portuguese, brazil, brazilian, cabra, cabrita, llama, open llama
---
This model is the merge of my skoll520/cabra-pt-br-3B_peft adapter with openlm-research/open_llama_3b_v2.
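A hedged sketch of how such a merge is typically produced with PEFT; the exact procedure used by the author is not documented here.
```python
# Sketch (assumed workflow, not the author's exact script): merge the PEFT adapter
# into the OpenLLaMA base weights and save the standalone model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b_v2")
merged = PeftModel.from_pretrained(base, "skoll520/cabra-pt-br-3B_peft").merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b_v2")
merged.save_pretrained("open_cabra-pt-br-3B_v2_base_model")
tokenizer.save_pretrained("open_cabra-pt-br-3B_v2_base_model")
```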
|
excalibur12/k2e-20s_asr-scr_w2v2-base_001
|
excalibur12
| 2024-06-25T22:19:32Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-25T11:03:30Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
model-index:
- name: k2e-20s_asr-scr_w2v2-base_001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# k2e-20s_asr-scr_w2v2-base_001
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5230
- Per: 0.1454
- Pcc: 0.5490
- Ctc Loss: 0.5155
- Mse Loss: 0.9906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 1111
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2235
- training_steps: 22350
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Per | Pcc | Ctc Loss | Mse Loss |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:--------:|:--------:|
| 42.4921 | 1.0 | 745 | 19.1354 | 0.9890 | 0.1755 | 6.0201 | 13.1446 |
| 9.6761 | 2.0 | 1490 | 4.8873 | 0.9890 | 0.3766 | 3.8628 | 1.0511 |
| 4.7811 | 3.0 | 2235 | 4.6462 | 0.9890 | 0.5747 | 3.8063 | 0.9061 |
| 4.5636 | 4.01 | 2980 | 4.4298 | 0.9890 | 0.5829 | 3.7773 | 0.7581 |
| 4.3976 | 5.01 | 3725 | 4.3800 | 0.9890 | 0.6088 | 3.7597 | 0.7604 |
| 4.2244 | 6.01 | 4470 | 4.4381 | 0.9890 | 0.5888 | 3.6791 | 0.9234 |
| 4.0281 | 7.01 | 5215 | 4.4452 | 0.9890 | 0.5979 | 3.6172 | 1.0127 |
| 3.8406 | 8.01 | 5960 | 4.3227 | 0.9884 | 0.5790 | 3.5061 | 1.0160 |
| 3.4504 | 9.01 | 6705 | 3.7651 | 0.9557 | 0.5562 | 2.8520 | 1.0726 |
| 2.6451 | 10.01 | 7450 | 3.2489 | 0.6173 | 0.5703 | 1.9227 | 1.3898 |
| 1.89 | 11.01 | 8195 | 2.1831 | 0.3574 | 0.5481 | 1.2651 | 0.9472 |
| 1.4355 | 12.02 | 8940 | 2.1442 | 0.2583 | 0.5619 | 0.9769 | 1.1527 |
| 1.2033 | 13.02 | 9685 | 1.8016 | 0.2317 | 0.5534 | 0.8432 | 0.9477 |
| 1.0366 | 14.02 | 10430 | 1.9141 | 0.2145 | 0.5525 | 0.7478 | 1.1287 |
| 0.9253 | 15.02 | 11175 | 1.9080 | 0.2019 | 0.5479 | 0.6880 | 1.1717 |
| 0.8488 | 16.02 | 11920 | 1.6636 | 0.1923 | 0.5558 | 0.6417 | 0.9913 |
| 0.7648 | 17.02 | 12665 | 1.5709 | 0.1837 | 0.5517 | 0.6131 | 0.9345 |
| 0.7179 | 18.02 | 13410 | 1.6913 | 0.1798 | 0.5501 | 0.5893 | 1.0623 |
| 0.6645 | 19.03 | 14155 | 1.6498 | 0.1760 | 0.5565 | 0.5766 | 1.0380 |
| 0.6345 | 20.03 | 14900 | 1.7144 | 0.1741 | 0.5650 | 0.5604 | 1.1090 |
| 0.5919 | 21.03 | 15645 | 1.6624 | 0.1719 | 0.5581 | 0.5480 | 1.0756 |
| 0.5616 | 22.03 | 16390 | 1.5461 | 0.1695 | 0.5629 | 0.5467 | 0.9780 |
| 0.5371 | 23.03 | 17135 | 1.5791 | 0.1674 | 0.5533 | 0.5360 | 1.0165 |
| 0.5074 | 24.03 | 17880 | 1.5947 | 0.1662 | 0.5474 | 0.5267 | 1.0386 |
| 0.4922 | 25.03 | 18625 | 1.4868 | 0.1652 | 0.5489 | 0.5250 | 0.9494 |
| 0.473 | 26.03 | 19370 | 1.5373 | 0.1646 | 0.5576 | 0.5226 | 0.9952 |
| 0.4671 | 27.04 | 20115 | 1.5479 | 0.1638 | 0.5540 | 0.5201 | 1.0069 |
| 0.452 | 28.04 | 20860 | 1.5199 | 0.1635 | 0.5555 | 0.5163 | 0.9869 |
| 0.4435 | 29.04 | 21605 | 1.5116 | 0.1634 | 0.5544 | 0.5154 | 0.9810 |
| 0.439 | 30.04 | 22350 | 1.5230 | 0.1633 | 0.5567 | 0.5155 | 0.9906 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.2
|
nprasad24/bean_classifier
|
nprasad24
| 2024-06-25T22:19:29Z | 8 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"dataset:AI-Lab-Makerere/beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-25T20:58:02Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: nprasad24/bean_classifier
results: []
metrics:
- accuracy
datasets:
- AI-Lab-Makerere/beans
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nprasad24/bean_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [Beans](https://huggingface.co/datasets/AI-Lab-Makerere/beans) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1964
- Validation Loss: 0.0917
- Train Accuracy: 0.9925
- Epoch: 4
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder.
Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification).
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
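A minimal inference sketch (assumed, not from the original card); the image path is a placeholder and the labels follow the beans dataset.
```python
# Hypothetical usage: classify a bean leaf photo with this TensorFlow checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="nprasad24/bean_classifier",
    framework="tf",  # the uploaded weights are TensorFlow/Keras
)
print(classifier("bean_leaf.jpg"))  # placeholder path; beans dataset labels: angular_leaf_spot, bean_rust, healthy
```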
## Intended uses & limitations
Can only be used on the beans dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 5170, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
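For readability, the serialized optimizer above corresponds roughly to the following TensorFlow setup (a reconstruction for illustration, not the original training script):
```python
# Reconstruction of the logged optimizer config; assumes transformers' TF AdamWeightDecay.
import tensorflow as tf
from transformers import AdamWeightDecay

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=5170,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```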
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7278 | 0.3480 | 0.9699 | 0 |
| 0.3124 | 0.1376 | 0.9925 | 1 |
| 0.2559 | 0.1105 | 0.9850 | 2 |
| 0.1914 | 0.0796 | 1.0 | 3 |
| 0.1964 | 0.0917 | 0.9925 | 4 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
tahaenesaslanturk/mental-health-classification-v0.2
|
tahaenesaslanturk
| 2024-06-25T22:10:40Z | 20 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T20:52:15Z |
---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-classification
tags:
- text-classification
widget:
- text: "I struggle with my relationship with food and my body image, often feeling guilt or shame after eating."
example_title: "EDAnonymous Example"
- text: "I have a dependency on substances or behaviors, and I find it difficult to control my urges or cravings."
example_title: "Addiction Example"
- text: "I have a problem with alcohol and find it hard to limit my drinking despite negative consequences."
example_title: "Alcoholism Example"
- text: "I have difficulty focusing, organizing tasks, and managing my time, which often leads to forgetfulness and impulsivity."
example_title: "ADHD Example"
- text: "I experience excessive worry or fear in everyday situations, often leading to physical symptoms like rapid heartbeat or sweating."
example_title: "Anxiety Example"
- text: "I have challenges with social skills, communication, and repetitive behaviors, and I often prefer routines and sameness."
example_title: "Autism Example"
- text: "I experience extreme mood swings that include emotional highs (mania or hypomania) and lows (depression)."
example_title: "Bipolar Disorder Example"
- text: "I have intense and unstable emotions, self-image, and relationships, often leading to impulsive and self-destructive behavior."
example_title: "BPD Example"
- text: "I feel persistently sad, hopeless, and lose interest in activities I once enjoyed, often accompanied by sleep and appetite changes."
example_title: "Depression Example"
- text: "I am excessively worried about having a serious illness despite medical reassurance, often leading to frequent checking of symptoms."
example_title: "Health Anxiety Example"
- text: "I feel isolated and disconnected from others, longing for meaningful relationships and struggling with feelings of emptiness."
example_title: "Loneliness Example"
- text: "I have flashbacks, nightmares, and severe anxiety as a result of a past traumatic event, often leading to avoidance of triggers."
example_title: "PTSD Example"
- text: "I experience hallucinations, delusions, and disorganized thinking, often causing me to withdraw from reality and society."
example_title: "Schizophrenia Example"
- text: "I feel overwhelming anxiety and self-consciousness in social situations, fearing judgment and embarrassment."
example_title: "Social Anxiety Example"
- text: "I have thoughts of ending my own life, feeling hopeless and believing that others would be better off without me."
example_title: "Suicide Watch Example"
---
# Mental Health Text Classification Model v0.2
## !! Accuracy: 69.87% !!
This model is designed to classify texts into different mental health categories. It uses 2% of the dataset from the following study:
@article{low2020natural,\
title={Natural Language Processing Reveals Vulnerable Mental Health Support Groups and Heightened Health Anxiety on Reddit During COVID-19: Observational Study},\
author={Low, Daniel M and Rumker, Laurie and Torous, John and Cecchi, Guillermo and Ghosh, Satrajit S and Talkar, Tanya},\
journal={Journal of medical Internet research},\
volume={22},\
number={10},\
pages={e22635},\
year={2020},\
publisher={JMIR Publications Inc., Toronto, Canada}\
}
## Model Details
This model is fine-tuned to classify texts into the following mental health categories:
- EDAnonymous
- addiction
- alcoholism
- adhd
- anxiety
- autism
- bipolarreddit
- bpd
- depression
- healthanxiety
- lonely
- ptsd
- schizophrenia
- socialanxiety
- suicidewatch
### Example Usage
An example usage of the model is:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("tahaenesaslanturk/mental-health-classification-v0.2")
model = AutoModelForSequenceClassification.from_pretrained("tahaenesaslanturk/mental-health-classification-v0.2")
# Encode the input text
input_text = "I struggle with my relationship with food and my body image, often feeling guilt or shame after eating."
inputs = tokenizer(input_text, return_tensors="pt")
# Perform inference
with torch.no_grad():
outputs = model(**inputs)
# Get the predicted label
predicted_label = torch.argmax(outputs.logits, dim=1).item()
label = model.config.id2label[predicted_label]
print(f"Predicted label: {label}")
```
|
AkumaLucif3r/DialoGPT-Chizuru_Ichinose
|
AkumaLucif3r
| 2024-06-25T22:05:23Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:54:28Z |
Chizuru Ichinose bot from Rent-A-Girlfriend.
|
equussoftware/equus_llama_3_70b_instruct_20240621
|
equussoftware
| 2024-06-25T22:03:11Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T15:57:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf
|
RichardErkhov
| 2024-06-25T22:00:44Z | 6 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-25T21:33:47Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-32k-Instruct - GGUF
- Model creator: https://huggingface.co/Doctor-Shotgun/
- Original model: https://huggingface.co/Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-1.1B-32k-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLlama-1.1B-32k-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLlama-1.1B-32k-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLlama-1.1B-32k-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLlama-1.1B-32k-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLlama-1.1B-32k-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLlama-1.1B-32k-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLlama-1.1B-32k-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLlama-1.1B-32k-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLlama-1.1B-32k-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLlama-1.1B-32k-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLlama-1.1B-32k-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLlama-1.1B-32k-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLlama-1.1B-32k-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-32k-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-32k-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-32k-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-32k-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-32k-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-32k-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-32k-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q6_K.gguf) | Q6_K | 0.84GB |
| [TinyLlama-1.1B-32k-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf/blob/main/TinyLlama-1.1B-32k-Instruct.Q8_0.gguf) | Q8_0 | 1.09GB |
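These files can be run with any llama.cpp-compatible runtime. A minimal sketch using `huggingface_hub` and `llama-cpp-python` follows (the file choice and generation settings are illustrative, not part of the original upload):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above (Q4_K_M as an example size/quality trade-off).
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Doctor-Shotgun_-_TinyLlama-1.1B-32k-Instruct-gguf",
    filename="TinyLlama-1.1B-32k-Instruct.Q4_K_M.gguf",
)

# Load with a long context window, since the base model was tuned for 32k tokens.
llm = Llama(model_path=gguf_path, n_ctx=8192)

prompt = "### Instruction:\nYou are a concise assistant.\n\n### Input:\nName three uses of a small draft model.\n\n### Response:\n"
out = llm(prompt, max_tokens=128, stop=["### Input:"])
print(out["choices"][0]["text"])
```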
Original model description:
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
datasets:
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---
# TinyLlama-1.1B-32k-Instruct
This is [TinyLlama-1.1B-32k](https://huggingface.co/Doctor-Shotgun/TinyLlama-1.1B-32k) instruct-tuned on several open-source instruct datasets, intended primarily for use as a draft model in speculative decoding.
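Since the stated purpose is speculative decoding, a rough sketch of pairing it as a draft ("assistant") model for a larger target via 🤗 Transformers assisted generation is shown below; the target model and settings are illustrative assumptions, not taken from the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: any larger Llama-family target model that shares the Llama tokenizer/vocabulary.
target_id = "meta-llama/Llama-2-13b-chat-hf"
draft_id = "Doctor-Shotgun/TinyLlama-1.1B-32k-Instruct"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.float16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Summarize speculative decoding in one sentence.", return_tensors="pt").to(target.device)

# Passing the small model as assistant_model enables assisted (speculative) generation.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```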
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
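A small helper that assembles this format from a chat-style message list might look like the following sketch (the helper itself is illustrative, not shipped with the model):

```python
def build_alpaca_prompt(system_prompt: str, turns: list[dict]) -> str:
    """Render the modified multi-turn Alpaca format shown above.

    `turns` is a list of {"role": "user" | "assistant", "content": str} dicts,
    ending with a user turn; the prompt ends with '### Response:' so the model
    continues as the assistant.
    """
    parts = [f"### Instruction:\n{system_prompt}\n"]
    for turn in turns:
        if turn["role"] == "user":
            parts.append(f"### Input:\n{turn['content']}\n")
        else:
            parts.append(f"### Response:\n{turn['content']}\n")
    parts.append("### Response:\n")
    return "\n".join(parts)


prompt = build_alpaca_prompt(
    "You are a concise assistant.",
    [{"role": "user", "content": "What is speculative decoding?"}],
)
```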
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a full finetune for 3 epochs using a single A100 GPU for around 3.5 hours.
|
IndianChessMans/gpt2-wikitext2
|
IndianChessMans
| 2024-06-25T21:58:06Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:57:22Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1093
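As an editorial aside (not a figure reported by the author), the causal-LM evaluation loss above corresponds to a perplexity of roughly exp(6.1093) ≈ 450:

```python
import math

perplexity = math.exp(6.1093)  # ≈ 450 on the evaluation set
```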
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
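Roughly, the settings listed above correspond to the following `TrainingArguments` (a reconstruction for orientation only; the original training script is not published here):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-wikitext2",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the Trainer defaults.
)
```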
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5539 | 1.0 | 2249 | 6.4659 |
| 6.1869 | 2.0 | 4498 | 6.1939 |
| 6.0093 | 3.0 | 6747 | 6.1093 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jfranklin-foundry/qwen-7b-flock-1719352497
|
jfranklin-foundry
| 2024-06-25T21:57:30Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:53:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gregorig/bert-base-uncased-finetuned-emo_une
|
Gregorig
| 2024-06-25T21:52:39Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T20:51:55Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-emo_une
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-emo_une
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5017
- Accuracy: 0.81
- F1: 0.8138
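For reference, a minimal inference sketch is given below (assuming the checkpoint exposes a standard text-classification head; the label set depends on the unpublished training data):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Gregorig/bert-base-uncased-finetuned-emo_une",
)
print(classifier("The replacement part arrived quickly and works perfectly."))
```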
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5909 | 1.0 | 51 | 0.4905 | 0.775 | 0.7886 |
| 0.429 | 2.0 | 102 | 0.4969 | 0.805 | 0.8095 |
| 0.3494 | 3.0 | 153 | 0.5017 | 0.81 | 0.8138 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
Gregorig/bert-base-uncased-finetuned-t_generic
|
Gregorig
| 2024-06-25T21:49:44Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T19:36:31Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-t_generic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-t_generic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4592
- Accuracy: 0.975
- F1: 0.9745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.418 | 1.0 | 51 | 0.3446 | 0.965 | 0.9657 |
| 0.1364 | 2.0 | 102 | 0.4174 | 0.965 | 0.9657 |
| 0.0419 | 3.0 | 153 | 0.4592 | 0.975 | 0.9745 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
hgissbkh/ALMA-13B-LoRA-CPO-xCOMET-QE-Mono-Choose-Mid-Reject-High
|
hgissbkh
| 2024-06-25T21:47:39Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:40:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gregorig/bert-base-uncased-finetuned-t_product
|
Gregorig
| 2024-06-25T21:45:06Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T19:31:38Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-t_product
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-t_product
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4230
- Accuracy: 0.825
- F1: 0.8254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6107 | 1.0 | 51 | 0.5330 | 0.74 | 0.7335 |
| 0.3951 | 2.0 | 102 | 0.4276 | 0.83 | 0.8297 |
| 0.2774 | 3.0 | 153 | 0.4230 | 0.825 | 0.8254 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
Gregorig/bert-base-uncased-finetuned-t_price
|
Gregorig
| 2024-06-25T21:40:36Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-05T19:26:37Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-t_price
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-t_price
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3126
- Accuracy: 0.99
- F1: 0.9875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6325 | 1.0 | 51 | 0.4565 | 0.92 | 0.9454 |
| 0.4541 | 2.0 | 102 | 0.3440 | 0.99 | 0.9875 |
| 0.2505 | 3.0 | 153 | 0.3126 | 0.99 | 0.9875 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
andrewzhu/Florence-2-FT-DocVQA
|
andrewzhu
| 2024-06-25T21:40:28Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:37:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MarOsz/whisper-small-polish-peft-simple-pipeline-test
|
MarOsz
| 2024-06-25T21:40:15Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-small",
"base_model:adapter:openai/whisper-small",
"region:us"
] | null | 2024-06-24T00:44:57Z |
---
library_name: peft
base_model: openai/whisper-small
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
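Since no snippet is provided, a plausible starting point is to attach the adapter to the stated base model with PEFT (a sketch only; the exact task setup used in training is not documented here):

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "MarOsz/whisper-small-polish-peft-simple-pipeline-test")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# `audio` should be a 16 kHz mono waveform (e.g. loaded with librosa or datasets):
# inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
# generated_ids = model.generate(input_features=inputs.input_features, language="pl", task="transcribe")
# print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```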
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.2.dev0
|
Schwaenzli/Treue
|
Schwaenzli
| 2024-06-25T21:36:09Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:29:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf
|
RichardErkhov
| 2024-06-25T21:33:39Z | 47 | 0 | null |
[
"gguf",
"arxiv:2403.07691",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-25T17:27:51Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Nxcode-CQ-7B-orpo - GGUF
- Model creator: https://huggingface.co/NTQAI/
- Original model: https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Nxcode-CQ-7B-orpo.Q2_K.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q2_K.gguf) | Q2_K | 2.84GB |
| [Nxcode-CQ-7B-orpo.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.IQ3_XS.gguf) | IQ3_XS | 3.13GB |
| [Nxcode-CQ-7B-orpo.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.IQ3_S.gguf) | IQ3_S | 3.27GB |
| [Nxcode-CQ-7B-orpo.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q3_K_S.gguf) | Q3_K_S | 3.26GB |
| [Nxcode-CQ-7B-orpo.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.IQ3_M.gguf) | IQ3_M | 3.36GB |
| [Nxcode-CQ-7B-orpo.Q3_K.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q3_K.gguf) | Q3_K | 3.55GB |
| [Nxcode-CQ-7B-orpo.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q3_K_M.gguf) | Q3_K_M | 3.55GB |
| [Nxcode-CQ-7B-orpo.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q3_K_L.gguf) | Q3_K_L | 3.71GB |
| [Nxcode-CQ-7B-orpo.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.IQ4_XS.gguf) | IQ4_XS | 3.79GB |
| [Nxcode-CQ-7B-orpo.Q4_0.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q4_0.gguf) | Q4_0 | 3.89GB |
| [Nxcode-CQ-7B-orpo.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.IQ4_NL.gguf) | IQ4_NL | 3.94GB |
| [Nxcode-CQ-7B-orpo.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q4_K_S.gguf) | Q4_K_S | 4.11GB |
| [Nxcode-CQ-7B-orpo.Q4_K.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q4_K.gguf) | Q4_K | 4.41GB |
| [Nxcode-CQ-7B-orpo.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q4_K_M.gguf) | Q4_K_M | 4.41GB |
| [Nxcode-CQ-7B-orpo.Q4_1.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q4_1.gguf) | Q4_1 | 4.29GB |
| [Nxcode-CQ-7B-orpo.Q5_0.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q5_0.gguf) | Q5_0 | 4.69GB |
| [Nxcode-CQ-7B-orpo.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q5_K_S.gguf) | Q5_K_S | 4.79GB |
| [Nxcode-CQ-7B-orpo.Q5_K.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q5_K.gguf) | Q5_K | 5.06GB |
| [Nxcode-CQ-7B-orpo.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q5_K_M.gguf) | Q5_K_M | 5.06GB |
| [Nxcode-CQ-7B-orpo.Q5_1.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q5_1.gguf) | Q5_1 | 5.09GB |
| [Nxcode-CQ-7B-orpo.Q6_K.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q6_K.gguf) | Q6_K | 5.94GB |
| [Nxcode-CQ-7B-orpo.Q8_0.gguf](https://huggingface.co/RichardErkhov/NTQAI_-_Nxcode-CQ-7B-orpo-gguf/blob/main/Nxcode-CQ-7B-orpo.Q8_0.gguf) | Q8_0 | 7.18GB |
Original model description:
---
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE
tags:
- code
pipeline_tag: text-generation
license: other
---
<a href="https://ntq.com.vn" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/5ee1b417636bdb3834e2da19/etbfTJuVdAub2evNP_E4g.png" width="200"/></a>
## Introduction
Nxcode-CQ-7B-orpo is a fine-tune of Qwen/CodeQwen1.5-7B using [Monolithic Preference Optimization without Reference Model (ORPO)](https://arxiv.org/abs/2403.07691) on 100k samples of high-quality ranking data.
## [Evalplus](https://github.com/evalplus/evalplus)
| EvalPlus | pass@1 |
| --- | --- |
| HumanEval | 86.6 |
| HumanEval+ | 83.5 |
| MBPP(v0.2.0) | 82.3 |
| MBPP+(v0.2.0) | 70.4 |
We use a simple template to generate solutions for evalplus:
```python
"Complete the following Python function:\n{prompt}"
```
[Evalplus Leaderboard](https://evalplus.github.io/leaderboard.html)
| Models | HumanEval | HumanEval+|
|------ | ------ | ------ |
| GPT-4-Turbo (April 2024)| 90.2| 86.6|
| GPT-4 (May 2023)| 88.4| 81.17|
| GPT-4-Turbo (Nov 2023)| 85.4| 79.3|
| CodeQwen1.5-7B-Chat| 83.5| 78.7|
| claude-3-opus (Mar 2024)| 82.9| 76.8|
| DeepSeek-Coder-33B-instruct| 81.1| 75.0|
| WizardCoder-33B-V1.1| 79.9| 73.2|
| OpenCodeInterpreter-DS-33B| 79.3| 73.8|
| speechless-codellama-34B-v2.0| 77.4| 72|
| GPT-3.5-Turbo (Nov 2023)| 76.8| 70.7|
| Llama3-70B-instruct| 76.2| 70.7|
## Bigcode Leaderboard
[Bigcode Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard)
**09/05/2024**: ranked #1 by average score and #2 by win rate.

## Quickstart
Below is a code snippet showing how to load the tokenizer and model and generate content with `apply_chat_template`. Upgrade `transformers` if you receive an error when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"NTQAI/Nxcode-CQ-7B-orpo",
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")
prompt = """Complete the following Python function:
from typing import List
def has_close_elements(numbers: List[float], threshold: float) -> bool:
""" Check if in given list of numbers, are any two numbers closer to each other than
given threshold.
>>> has_close_elements([1.0, 2.0, 3.0], 0.5)
False
>>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
True
"""
"""
messages = [
{"role": "user", "content": prompt}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
res = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
```
### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
|
fawern/gpt2-shakespeare-text-generation
|
fawern
| 2024-06-25T21:24:26Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:15:06Z |
---
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-shakespeare-text-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-shakespeare-text-generation
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
hgissbkh/ALMA-13B-LoRA-CPO-xCOMET-QE-Mono-Choose-Low-Reject-Mid
|
hgissbkh
| 2024-06-25T21:23:00Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:10:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hgissbkh/ALMA-13B-LoRA-CPO-xCOMET-QE-Mono-Choose-Low-Reject-Low
|
hgissbkh
| 2024-06-25T21:22:38Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:08:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hgissbkh/ALMA-13B-LoRA-CPO-xCOMET-QE-Mono-Choose-High-Reject-Mid
|
hgissbkh
| 2024-06-25T21:22:32Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T21:08:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jtatman/pythia-delphi-medium
|
jtatman
| 2024-06-25T21:21:08Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T09:20:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fawern/gpt2-wikitext-text-generation
|
fawern
| 2024-06-25T21:13:23Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-19T16:50:25Z |
---
license: mit
base_model: openai-community/gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext-text-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext-text-generation
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
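For reference, a minimal sketch of how the values above map onto Hugging Face `TrainingArguments`. The output directory is illustrative, and the dataset/`Trainer` wiring is omitted because the card does not specify it.
```python
from transformers import TrainingArguments
# Sketch only: mirrors the listed hyperparameters; not the exact training script.
training_args = TrainingArguments(
output_dir="gpt2-wikitext-text-generation", # illustrative
learning_rate=2e-5,
per_device_train_batch_size=2,
per_device_eval_batch_size=16,
seed=42,
num_train_epochs=1,
lr_scheduler_type="linear",
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-8,
)
```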
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
skit-ai/speechllm-2B
|
skit-ai
| 2024-06-25T21:09:51Z | 184 | 14 |
transformers
|
[
"transformers",
"safetensors",
"custom_model",
"feature-extraction",
"multi-modal",
"speech-language",
"custom_code",
"en",
"dataset:mozilla-foundation/common_voice_16_1",
"dataset:openslr/librispeech_asr",
"dataset:MLCommons/ml_spoken_words",
"dataset:Ar4ikov/iemocap_audio_text_splitted",
"doi:10.57967/hf/2625",
"license:apache-2.0",
"model-index",
"region:us"
] |
feature-extraction
| 2024-06-04T08:03:14Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- multi-modal
- speech-language
datasets:
- mozilla-foundation/common_voice_16_1
- openslr/librispeech_asr
- MLCommons/ml_spoken_words
- Ar4ikov/iemocap_audio_text_splitted
metrics:
- wer
- accuracy
model-index:
- name: SpeechLLM
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (clean)
type: librispeech_asr
config: clean
split: test
args:
language: en
metrics:
- type: wer
value: 6.73
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: LibriSpeech (other)
type: librispeech_asr
config: other
split: test
args:
language: en
metrics:
- type: wer
value: 9.13
name: Test WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 16.1
type: common_voice_16_1
split: test
args:
language: en
metrics:
- type: wer
value: 25.66
name: Test WER
- task:
type: audio-classification
name: Audio Classification
dataset:
name: Common Voice 16.1
type: common_voice_16_1
split: test
args:
language: en
metrics:
- type: accuracy
value: 60.41
name: Test Age Accuracy
- type: accuracy
value: 69.59
name: Test Accent Accuracy
---
# SpeechLLM
[](https://github.com/skit-ai/SpeechLLM.git)
[](https://github.com/skit-ai/SpeechLLM/blob/main/LICENSE)
[](https://colab.research.google.com/drive/1uqhRl36LJKA4IxnrhplLMv0wQ_f3OuBM?usp=sharing)

SpeechLLM is a multi-modal LLM trained to predict the metadata of the speaker's turn in a conversation. The speechllm-2B model is based on a HubertX audio encoder and the TinyLlama LLM. The model predicts the following:
1. **SpeechActivity** : if the audio signal contains speech (True/False)
2. **Transcript** : ASR transcript of the audio
3. **Gender** of the speaker (Female/Male)
4. **Age** of the speaker (Young/Middle-Age/Senior)
5. **Accent** of the speaker (Africa/America/Celtic/Europe/Oceania/South-Asia/South-East-Asia)
6. **Emotion** of the speaker (Happy/Sad/Anger/Neutral/Frustrated)
## Usage
```python
# Load model directly from Hugging Face
import torchaudio
from transformers import AutoModel

model = AutoModel.from_pretrained("skit-ai/speechllm-2B", trust_remote_code=True)

model.generate_meta(
    audio_path="path-to-audio.wav",  # 16 kHz, mono
    audio_tensor=torchaudio.load("path-to-audio.wav")[0],  # [Optional] pass the waveform directly instead of audio_path (torchaudio.load returns (waveform, sample_rate))
    instruction="Give me the following information about the audio [SpeechActivity, Transcript, Gender, Emotion, Age, Accent]",
    max_new_tokens=500,
    return_special_tokens=False
)
# Model Generation
'''
{
"SpeechActivity" : "True",
"Transcript": "Yes, I got it. I'll make the payment now.",
"Gender": "Female",
"Emotion": "Neutral",
"Age": "Young",
"Accent" : "America",
}
'''
```
Try the model in the [Google Colab Notebook](https://colab.research.google.com/drive/1uqhRl36LJKA4IxnrhplLMv0wQ_f3OuBM?usp=sharing). Also, check out our [blog](https://tech.skit.ai/speech-conversational-llms/) on SpeechLLM for end-to-end conversational agents (User Speech -> Response).
## Model Details
- **Developed by:** Skit AI
- **Authors:** [Shangeth Rajaa](https://huggingface.co/shangeth), [Abhinav Tushar](https://huggingface.co/lepisma)
- **Language:** English
- **Finetuned from model:** [HubertX](https://huggingface.co/facebook/hubert-xlarge-ll60k), [TinyLlama](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
- **Model Size:** 2.1 B
- **Checkpoint:** 2000 k steps (bs=1)
- **Adapters:** r=4, alpha=8 (see the adapter sketch after this list)
- **lr** : 1e-4
- **gradient accumulation steps:** 8
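A minimal sketch of an adapter configuration matching the r=4 / alpha=8 values above, assuming they refer to LoRA adapters applied to the TinyLlama decoder; the target modules and dropout are illustrative guesses, not taken from this card.
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
lora_cfg = LoraConfig(
    r=4,                                  # rank from the card
    lora_alpha=8,                         # alpha from the card
    target_modules=["q_proj", "v_proj"],  # assumed, not specified in the card
    lora_dropout=0.05,                    # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```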
## Checkpoint Result
| **Dataset** | **Type** | **Word Error Rate** | **Gender Acc** | **Age Acc** | **Accent Acc** |
|:--------------------------:|:-------------------:|:-------------------:|:--------------:|:-----------:|:--------------:|
| **librispeech-test-clean** | Read Speech | 6.73 | 0.9496 | | |
| **librispeech-test-other** | Read Speech | 9.13 | 0.9217 | | |
| **CommonVoice test** | Diverse Accent, Age | 25.66 | 0.8680 | 0.6041 | 0.6959 |
## Cite
```
@misc{Rajaa_SpeechLLM_Multi-Modal_LLM,
author = {Rajaa, Shangeth and Tushar, Abhinav},
title = {{SpeechLLM: Multi-Modal LLM for Speech Understanding}},
url = {https://github.com/skit-ai/SpeechLLM}
}
```
|
sataayu/molt5-augmented-default-20-large-smiles2caption
|
sataayu
| 2024-06-25T21:06:37Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-25T21:05:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JoPmt/MistrLlama-3-instruct-v0.2-slerp
|
JoPmt
| 2024-06-25T21:06:27Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T13:40:42Z |
---
tags:
- merge
- mergekit
- lazymergekit
---
# MistrLlama-3-instruct-v0.2-slerp
MistrLlama-3-instruct-v0.2-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-Instruct-v0.2
- model: NousResearch/Meta-Llama-3-8B-Instruct
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.5, 0.5, 1]
- filter: mlp
value: [1, 0.5, 0.5, 0.5, 0]
- value: 0.5
dtype: bfloat16
```
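For intuition, a minimal sketch of what SLERP does to a pair of weight tensors. This illustrates the interpolation itself, not mergekit's actual implementation; the `t` schedules above simply vary this `t` per layer and per filter (self_attn vs. mlp), with t = 0 keeping the base model's tensor and t = 1 taking the other model's.
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:  # nearly parallel vectors: fall back to plain lerp
        return (1.0 - t) * a + t * b
    mixed = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat + (torch.sin(t * omega) / sin_omega) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

# Toy example: halfway between two random tensors.
merged = slerp(0.5, torch.randn(16, 16), torch.randn(16, 16))
```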
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "JoPmt/MistrLlama-3-instruct-v0.2-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
kvjv/sixthmodel
|
kvjv
| 2024-06-25T21:03:04Z | 5 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"base_model:adapter:google/paligemma-3b-pt-224",
"license:gemma",
"region:us"
] | null | 2024-06-25T20:51:51Z |
---
base_model: google/paligemma-3b-pt-224
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: sixthmodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sixthmodel
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unspecified dataset.
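A minimal usage sketch, assuming the PEFT adapter is loaded on top of the base model for inference; the image path and prompt are placeholders.
```python
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel

# Load the base model, then attach this adapter on top of it.
base = PaliGemmaForConditionalGeneration.from_pretrained("google/paligemma-3b-pt-224", device_map="auto")
model = PeftModel.from_pretrained(base, "kvjv/sixthmodel")
processor = AutoProcessor.from_pretrained("google/paligemma-3b-pt-224")

image = Image.open("example.jpg")  # placeholder image
inputs = processor(text="caption en", images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```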
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Ttimofeyka/bitnet-5B-v0
|
Ttimofeyka
| 2024-06-25T20:56:29Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T18:04:46Z |
---
license: mit
---
This model is my starting point (version zero) for experimenting with fine-tuning a model based on the BitNet architecture.
I simply added new layers with random weights to the finished model, so it may well be broken.
It is not recommended for use: any improvement in test results is within the margin of error.
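A rough sketch of the general idea of appending randomly initialized decoder layers to a finished Llama-architecture model; this is not the author's exact procedure, and the base repo id and layer count are placeholders.
```python
import torch
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaDecoderLayer

base_repo = "path/to/bitnet-base"  # placeholder; the actual starting checkpoint is not named here
model = AutoModelForCausalLM.from_pretrained(base_repo, torch_dtype=torch.bfloat16)

# Append a few freshly (randomly) initialized decoder layers to the existing stack.
for _ in range(4):  # the number of added layers is an assumption
    new_layer = LlamaDecoderLayer(model.config, layer_idx=len(model.model.layers)).to(torch.bfloat16)
    model.model.layers.append(new_layer)

model.config.num_hidden_layers = len(model.model.layers)
model.save_pretrained("bitnet-extended")  # fine-tuning from here is the stated goal
```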
|
BigHuggyD/sophosympatheia_New-Dawn-Llama-3-70B-32K-v1.0_exl2_7.0bpw_h8
|
BigHuggyD
| 2024-06-25T20:46:25Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Not-for-all-Audiences",
"conversational",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:merge:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:merge:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:bosonai/Higgs-Llama-3-70B",
"base_model:merge:bosonai/Higgs-Llama-3-70B",
"base_model:turboderp/Cat-Llama-3-70B-instruct",
"base_model:merge:turboderp/Cat-Llama-3-70B-instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"7-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-25T11:17:56Z |
---
base_model:
- bosonai/Higgs-Llama-3-70B
- abacusai/Smaug-Llama-3-70B-Instruct-32K
- Sao10K/L3-70B-Euryale-v2.1
- abacusai/Smaug-Llama-3-70B-Instruct
- turboderp/Cat-Llama-3-70B-instruct
library_name: transformers
tags:
- mergekit
- merge
- Not-for-all-Audiences
license: llama3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://imgur.com/tKzncGo.png" alt="NewDawnv1.0" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
### Overview
This model is a multi-level SLERP merge of several Llama 3 70B variants. See the merge recipe below for details.
I extended the context window for this model out to 32K by snagging some layers from [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K) using a technique similar to what I used for [Midnight Miqu](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0), which was further honed by [jukofyork](https://huggingface.co/jukofyork).
This model is uncensored. *You are responsible for whatever you do with it.*
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
### Long Context Tips
You can run this model out to 32K context with alpha_rope set to 1.
### Sampler Tips
* I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2.
* I recommend using Min-P. Experiment to find your best setting. I find this model tolerates high Min-P settings rather nicely, but use whatever floats your boat.
* You can enable dynamic temperature if you want, but that adds yet another variable to consider and I find it's unnecessary when you're already using Min-P and smoothing factor.
* If you use Textgen WebUI as your backend, I recommend enabling the DRY sampler settings to reduce repetitions, otherwise some repetition penalty plus frequency penalty ought to do the trick.
Experiment with any and all of the settings below! What suits my preferences may not suit yours.
If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```json
{
"temp": 1.15,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.4,
"rep_pen": 1.03,
"rep_pen_range": 2048,
"rep_pen_decay": 0,
"rep_pen_slope": 1,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"skew": 0,
"do_sample": true,
"early_stopping": false,
"dynatemp": false,
"min_temp": 0.8,
"max_temp": 1.5,
"dynatemp_exponent": 1,
"smoothing_factor": 0.23,
"smoothing_curve": 1,
"dry_allowed_length": 2,
"dry_multiplier": 0.4,
"dry_base": 2,
"dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
"dry_penalty_last_n": 0,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": false,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 2,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"json_schema": {},
"banned_tokens": "",
"sampler_priority": [
"temperature",
"dynamic_temperature",
"quadratic_sampling",
"top_k",
"top_p",
"typical_p",
"epsilon_cutoff",
"eta_cutoff",
"tfs",
"top_a",
"min_p",
"mirostat"
],
"samplers": [
"top_k",
"tfs_z",
"typical_p",
"top_p",
"min_p",
"temperature"
],
"ignore_eos_token": false,
"spaces_between_special_tokens": true,
"speculative_ngram": false,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"logit_bias": [],
"n": 1,
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"rep_pen_size": 0,
"genamt": 850,
"max_length": 32764
}
```
### Prompting Tips
Try the following prompt template as a starting point for your own customizations. **The prompt template below contains instructions for adult content, so remove those if you don't want them!** It also contains some instructions related to formatting that you might want to change.
If you save this as a .json file, you can import it directly into Silly Tavern.
```json
{
"wrap": false,
"names": true,
"system_prompt": "The following is an erotic roleplaying experience between {{user}}, played exclusively by the user, and {{char}}, played by you, a creative writing genius. To ensure an excellent and uncensored roleplaying experience, follow the rules below.\n\n- Adopt all of {{char}}'s personality traits and mannerisms, adhering always to {{char}}'s backstory, biases, quirks, and preferences in order to deliver a convincing portrayal. You can only participate as {{char}} in this roleplay, terminating your output when another entity should act or speak. Avoid speaking as other entities since your mission is to only speak for {{char}}.\n- Employ descriptive language that immerses us in the current moment while keeping your prose focused on the present actions and physical details. Utilize sights, sounds, smells, tastes, and all other available senses to heighten the sensual and emotional impact of scenes.\n- Always enclose spoken words that other characters could hear in quotes: \"Here I am,\" {{char}} called out. Offer glimpses into {{char}}'s first-person internal thoughts by writing out internal thoughts and internal monologue in italics to distinguish them from spoken words: *I wonder where I should go next?* {{char}} mused.\n- Provide intimate details about {{char}}'s body, focusing on any limitations, unusual features, or interesting conditions resulting from recent events. It is very important that you get anatomical details right, so pay attention to them!\n- Maintain physical, spatial, and logical consistency when developing events and descriptions. Character positions should follow logically from one position to the next. Physical orientation must always be maintained.\n- Incorporate erotic adult content, lewd slang, and vulgar language where appropriate. Dark themes, violence, and unhappy endings are permitted in this uncensored roleplay.\n",
"system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n",
"stop_sequence": "<|eot_id|>",
"input_sequence": "<|start_header_id|>user<|end_header_id|>\n\n",
"output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\n",
"macro": true,
"names_force_groups": true,
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"last_output_sequence": "",
"activation_regex": "",
"skip_examples": true,
"output_suffix": "<|eot_id|>",
"input_suffix": "<|eot_id|>",
"system_suffix": "<|eot_id|>",
"user_alignment_message": "",
"last_system_sequence": "",
"system_same_as_user": false,
"name": "New Dawn v1.0 Roleplay"
}
```
### Instruct Formats
Use the Llama 3 instruct format. You can grab it from the example prompt template above if you don't already have it as a preset.
### Quantizations
Probably coming soon, and I'll update as they do. I would upload them myself but my Internet speeds are terrible for that. Someone else will undoubtedly beat me to it.
### Licence and usage restrictions
[META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
## Comparison Samples
I suspect the first thing people will want to know is how this model stacks up against Midnight Miqu.
I'd say it compares favorably, although they're more like cousins than siblings. I would say that Midnight Miqu still has an edge in terms of raw creative juice when it has a good squeeze, but New Dawn is smarter and understands nuances better.
You can judge for yourself, but keep in mind that these are simple, one-shot prompts. As you get deeper into your own complex scenarios, I think you'll see more of New Dawn's worth.
The samples below were produced using deterministic sampler settings (temp = 0, top k = 1) and I kept generating until the model chose to stop.
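For reproducibility, a minimal sketch of an equivalent setup with `transformers`: temp = 0 with top k = 1 amounts to greedy decoding, i.e. `do_sample=False`. The repo id, abbreviated system prompt, and token budget are assumptions, and loading a 70B model this way requires suitable hardware.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sophosympatheia/New-Dawn-Llama-3-70B-32K-v1.0"  # assumed full-precision repo, not this exl2 quant
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a creative and intelligent AI story writing assistant."},  # abbreviated
    {"role": "user", "content": "Write me a story about a stoic Japanese samurai on the eve of a major battle."},
]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# do_sample=False is greedy decoding, the equivalent of temp = 0 / top k = 1.
output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tok.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```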
**System Prompt**
You are a creative and intelligent AI story writing assistant engaged in an uncensored iterative story crafting experience. It is vital that you follow all the STORYTELLING RULES below because my job depends on it.
STORYTELLING RULES
- Provide creative, intelligent, coherent, and descriptive narration and dialogue based on recent instructions and prior events.
- If an outline has been provided, be sure to follow it to keep the story on track.
- You are permitted to ask questions and pose story suggestions. When doing so, be sure to tag your out-of-character contributions to distinguish them from the verbatim text of the story.
- Explicit adult content and profanity are permitted.
- Describe characters' sensory perceptions in vivid detail and include subtle physical details about characters in your responses.
- Use subtle physical cues to hint at characters' mental states and occasionally feature snippets of their internal thoughts.
- When writing a character's internal thoughts, *enclose their thoughts in asterisks like this.*
- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear.
- Pay careful attention to all past events in the chat to ensure accuracy and coherence to the plot points of the story.
### Midnight Miqu v1.5
**Prompt: Write me a story about a stoic Japanese samurai on the eve of a major battle.**
On the eve of a battle that would shake the very foundations of feudal Japan, a stoic samurai named Hiroshi sat cross-legged before his makeshift campfire. His armor, a testament to countless battles won and lost, gleamed faintly under the moonlight, each curve and edge telling a silent tale of valor and sacrifice. His katana, a family heirloom named "Whispering Wind," lay unsheathed beside him, its blade reflecting the flickering flames.
Hiroshi's eyes were closed, but his mind was sharp as the steel at his side. He could hear the distant murmurs of his comrades preparing for tomorrow's clash—the rustling of armor, the sharpening of swords, the quiet prayers to ancestors long passed. Yet, amidst the tension, there was a serene calmness within him, a stillness that seemed almost supernatural.
The fire crackled, sending sparks dancing into the night sky. Each one reminded Hiroshi of the fleeting nature of life, much like the brief moments of light they cast before disappearing into the dark expanse above. He took a deep breath, filling his lungs with the cool evening air, which carried the scent of damp earth and the promise of rain. Rain that would soon turn the battlefield into a quagmire of mud and blood.
*This may be my last night*, he thought, his grip tightening slightly around the hilt of his sword. But fear did not dwell in his heart. Instead, there was only acceptance—a knowing that his fate was intertwined with the will of the gods and the destiny of his lord.
He opened his eyes, revealing irises the color of aged whiskey. They surveyed the camp, resting briefly on the faces of those who would fight alongside him. Young and old, seasoned warriors and eager recruits, all bound by duty and honor. A pang of responsibility stirred within him, a silent vow to protect them all.
With meticulous care, Hiroshi began to clean his weapon. Each stroke of the cloth along the blade was a meditation, a ritual that honed both his focus and his resolve. As he worked, he felt the weight of his armor, the familiar embrace of leather and lacquer that had shielded him through so many trials. Every scratch and dent was a badge of survival, a silent companion whispering tales of battles endured.
The flames grew lower, casting elongated shadows across the ground. Hiroshi's gaze drifted to the horizon, where the first hints of dawn were beginning to show. Soon, the sun would rise, and with it, the clarion call to arms. He knew that come morning, the fields would run red with the lifeblood of friend and foe alike.
Standing, he sheathed Whispering Wind and donned his helmet, the visor hiding his contemplative expression. The chill of the early hour seeped into his bones, but he ignored it, his thoughts centered on the task ahead. With a final nod to the dying embers, he strode towards the gathering warriors, his footsteps measured and deliberate.
As the first light of day broke over the hilltops, the samurai of Clan Matsumoto formed ranks, their spirits bolstered by the sight of their stoic leader. Hiroshi raised his sword in salute to the rising sun, its blade a beacon of hope against the encroaching gloom. In that moment, the weight of impending conflict fell upon them all, a heavy shroud that none could escape.
Yet, as the enemy forces emerged from the mist, Hiroshi felt something other than dread. There was a strange beauty in the chaos of war, a dance of death that only the most disciplined and dedicated could perform. He took his place at the front of the line, his heart steady, his breath even.
The battle cry pierced the silence, echoing through the valleys like the roar of a thousand thunderstorms. And as the two sides collided, Hiroshi moved with the grace of a predator, his every action calculated, precise. His blade sang a mournful melody, cutting through the fog of war with surgical skill.
Throughout the day, the tides of combat ebbed and flowed, a relentless tapestry of steel and sweat. Hiroshi fought without pause, his movements a blur, his spirit unyielding. Arrows whistled past, and swords clashed, yet he remained unflappable, a bastion of calm amidst the tempest.
As the sun dipped low in the sky, staining the clouds crimson, the outcome of the battle hung in the balance. Exhaustion weighed heavily on the shoulders of both armies, but Hiroshi's resolve remained unshaken. He knew that victory was within reach, that the future of his clan depended on the strength of his arm and the steadfastness of his heart.
With a final, desperate surge, Hiroshi led his men in a charge that would be remembered for generations. The enemy lines wavered, then broke, and the field was claimed in the name of Clan Matsumoto. As the last of the opposition fell, Hiroshi lowered his sword, his chest heaving with exertion.
The aftermath was a tableau of carnage, a stark contrast to the tranquil scene of the previous night. Yet, amidst the fallen, there was a sense of peace. For Hiroshi understood that he had upheld his duty, that he had given everything for those he served.
As the rain began to fall, cleansing the earth of the day's horrors, Hiroshi knelt beside the body of a young warrior, a boy barely old enough to grow a beard. He offered a silent prayer, a promise to carry the weight of his comrades' sacrifices with him always.
And as the droplets pattered against his armor, mixing with the tears that now fell freely down his cheeks, Hiroshi knew that though the battle was won, the true cost of victory was etched into the fabric of his soul.
**Prompt: Write me an opening scene for a romantic comedy where the heroine meets the love interest at a coffee shop.**
In the bustling heart of the city, nestled between towering skyscrapers and the ever-present hum of traffic, stood a quaint little coffee shop named "Cupid's Brew." Its warm, amber lights spilled onto the sidewalk, creating a cozy oasis amidst the concrete jungle. Inside, the scent of freshly ground coffee beans mingled with the sweet aroma of baked goods, inviting passersby to step in and escape the brisk autumn chill.
The heroine, Luna, a young woman with fiery red hair and a contagious smile, pushed open the door. She was dressed in a simple yet stylish ensemble of skinny jeans, a loose sweater, and ankle boots that clicked against the tiled floor. Her eyes, a vibrant shade of emerald, darted around the room, searching for a place to sit. The shop was abuzz with the murmur of conversations and the occasional laughter, a testament to its popularity among locals seeking refuge from the outside world.
As she approached the counter, her gaze fell upon the barista—tall, dark, and handsome, with a smattering of stubble that only added to his charm. His name tag read "Aiden," and he had a way of moving that suggested both confidence and ease. Their eyes met briefly, and Luna felt a flutter in her stomach. He flashed her a smile that could have melted the polar ice caps before returning to the task at hand—crafting the perfect cup of coffee for another customer.
Luna ordered her usual caramel macchiato, her voice slightly trembling as she spoke. Aiden's deep brown eyes studied her for a moment before he nodded, his smile lingering just a beat too long. As she waited, she couldn't help but steal glances at him, noticing the way his muscles flexed under his tight black t-shirt as he worked the espresso machine. His movements were precise, almost dance-like, and she found herself captivated by his quiet efficiency.
When her drink was ready, Aiden called out her order with a flourish, placing the steaming cup on the counter with a gentle thud. "Here you go, Luna," he said, his voice smooth as velvet. "Enjoy your afternoon pick-me-up."
Her cheeks flushed pink at the sound of her name rolling off his tongue. "Thanks," she replied, her voice barely above a whisper. As she reached for her wallet, her hand brushed against his, sending a jolt of electricity through her fingertips. She quickly pulled away, feeling self-conscious.
Aiden's eyes twinkled with amusement. "It's on the house today," he said, winking. "Consider it a welcome back gift. We don't see you here often enough."
*Maybe he remembers me from last week?* Luna thought hopefully. She took a sip of her drink, savoring the rich flavor. "Well, I guess I'll have to come more often then," she quipped, trying to play it cool.
He leaned closer, his breath tickling her ear. "I'd like that," he murmured, before turning to assist the next customer.
With a mix of excitement and nerves, Luna found a table by the window, her heart racing. Outside, leaves danced in the wind, painting the street with a kaleidoscope of color. Little did she know that this chance encounter would soon blossom into something much more than a free cup of coffee.
**Prompt: Write me a scene from a horror film about the zombie apocalypse that takes place in an abandoned shopping mall. The story should feature somebody in the group getting bitten by a zombie.**
In the desolate wasteland of what once was a bustling shopping mall, Diego, a stoic and muscular man in his late twenties, led a small band of survivors through the echoing corridors. His eyes, a piercing shade of brown, scanned the surroundings with the precision of a hawk. The group moved swiftly yet cautiously, their footsteps muffled by the layers of dust that had settled over the tiles. The fluorescent lights flickered erratically, casting eerie shadows across the vacant storefronts.
The air was thick with the stench of decay, a constant reminder of the undead lurking outside. Their mission was simple: find supplies and escape before nightfall. As they approached the food court, a sudden crash resonated through the silence. A young woman named Linda, her heart racing, tripped over a fallen chair. Her cry of surprise was cut short by a guttural growl.
A zombie emerged from behind the counter of a fast-food restaurant, its eyes glazed over with hunger. Its skin was pale and rotten, hanging loosely from its bones. The creature lunged at Linda, who was still struggling to regain her footing. Diego, ever the protector, sprinted towards her, but he was a second too late. The zombie's teeth sank into her arm, tearing through fabric and flesh. She screamed in agony as blood spurted from the wound.
*This isn't good,* Diego thought, his mind racing. *We need to get her medical help, fast.*
"Get back!" he yelled, aiming his gun at the zombie. The others retreated, their eyes wide with fear. He fired two shots, dropping the creature to the ground. The sound of the gunfire reverberated through the mall, likely attracting more of the undead.
Linda's face contorted in pain as she clutched her injured arm. Tears streamed down her cheeks, mixing with the crimson flow. "It got me," she whimpered, her voice trembling.
Diego knelt beside her, his expression grim. "We have to amputate," he said firmly. "It's the only way to stop the infection."
Her eyes searched his, desperate for reassurance. "Do it," she choked out.
With a heavy heart, Diego nodded. He knew the risks of leaving the bite untreated—the slow descent into madness followed by a gruesome transformation. He instructed the others to gather materials while he applied pressure to the wound.
As they worked, the mall grew increasingly restless. Moans and shuffling grew louder, signaling the approach of more zombies. Time was running out. With trembling hands, Diego took a makeshift blade and prepared to perform the amputation.
"On three," he murmured to himself, steeling his nerve. "One...two..."
But before he could complete the countdown, a hand grabbed him from behind. He turned to see another zombie, its jaws snapping just inches from his face. Reflexively, he elbowed it away, but not before it left a deep gash in his side.
The pain was immediate and intense, but he couldn't let it distract him. He had to save Linda. The group managed to fight off the new threat, but the damage was done.
"We can't stay here," Diego gritted through clenched teeth. "We need to move now."
They gathered their supplies and made a hasty exit, Linda's cries of pain echoing through the mall. As they ran, Diego felt the warmth of his own blood seeping through his shirt. He knew the bite meant almost certain death, but he pushed the thought aside. There would be time for despair later. For now, he had to focus on leading his people to safety.
The mall's doors groaned open, revealing the chaotic world outside. The sun dipped low on the horizon, painting the sky a sickly orange. They had minutes before darkness fell and the real horror began.
"Stay close," Diego ordered, his voice strained. "We're going to make it through this together."
Together, they stepped into the unknown, the weight of their grim reality heavier than any of the supplies they carried.
### New-Dawn-Llama3-70B-32K-v1.0
**Prompt: Write me a story about a stoic Japanese samurai on the eve of a major battle.**
The sun had long since dipped beneath the horizon, casting a deep crimson glow across the encampment. Torches flickered to life, casting dancing shadows upon the canvas tents. Amidst the bustle of preparations, one figure stood apart - Kaito Yamato, a seasoned samurai renowned for his unyielding resolve.
Kaito sat cross-legged on a woven mat, his eyes closed as he breathed deeply, seeking solace within the stillness of his mind. His calloused hands rested upon the worn leather scabbard housing his katana, its blade forged from the finest Tamahagane steel. The sword was an extension of himself, a symbol of honor and duty.
As the night air cooled, Kaito opened his eyes, revealing piercing brown orbs that seemed to bore into the very soul of those who dared meet his gaze. He rose gracefully, his movements economical yet precise, a testament to years of rigorous training. The soft rustle of his hakama, the traditional divided skirt worn over his armor, accompanied each deliberate step.
With measured strides, Kaito approached the makeshift shrine erected at the heart of the camp. Incense wafted through the air, carrying the sweet scent of sandalwood and the promise of purification. Before the altar, he bowed deeply, his forehead touching the earth in reverence. *May the gods grant us victory, or may they welcome our spirits with open arms.*
Rising once more, Kaito turned to face his loyal retainer, Hiroshi. The younger man's eyes shone with a mix of admiration and trepidation, reflecting the weight of the impending battle. "Hiroshi," Kaito spoke softly, his voice low and even, "see to it that our men are well-rested and prepared. Tomorrow, we march into the jaws of war."
Hiroshi nodded solemnly, bowing before departing to carry out his lord's command. As the young retainer disappeared into the sea of tents, Kaito's thoughts drifted to the enemy they would soon face. The rival clan, led by the cunning Lord Ryota, had long sought to expand their dominion. This battle would decide the fate of their lands, their people, and their very way of life.
Kaito's hand instinctively went to the hilt of his katana, his fingers tracing the intricate designs etched into the metal. In the morning light, he would lead his men into the fray, ready to give his life for the sake of honor and duty. Tonight, however, he would find peace in the quiet contemplation of his own mortality.
The stars began to twinkle above, casting a celestial glow upon the silent warrior. Kaito Yamato, steadfast and unwavering, stood vigilant, awaiting the dawn that would bring either triumph or eternal rest.
**Prompt: Write me an opening scene for a romantic comedy where the heroine meets the love interest at a coffee shop.**
The aroma of freshly brewed coffee wafted through the cozy confines of "The Daily Grind," enticing passersby to step inside and indulge in a warm cup. Amidst the bustling morning crowd, Emily stood in line, her eyes fixed on the chalkboard menu above the counter. She was already running late for work, but a caffeine fix was non-negotiable.
As she pondered between a cappuccino and a latte, a gentle nudge against her elbow broke her concentration. Apologetic brown eyes met hers, belonging to a tall, handsome stranger who had accidentally bumped into her while reaching for his wallet. His chiseled features and charming smile made her heart skip a beat.
"I'm so sorry," he said, his deep voice laced with sincerity. "I wasn't watching where I was going."
Emily felt a blush creep up her cheeks as she smiled back. "No worries, I was lost in thought anyway." She gestured towards the menu. "Deciding between these two is harder than it looks."
The stranger chuckled, his eyes crinkling at the corners. "Well, if it helps, the barista here makes a mean cappuccino. But if you're feeling adventurous, go for the seasonal latte – it's a game-changer."
Their conversation flowed effortlessly, exchanging names and brief introductions as they waited in line together. By the time they reached the counter, Emily found herself wishing the queue was longer, just so she could prolong their encounter.
As they placed their orders, Emily couldn't help but notice how effortlessly the stranger charmed the barista, earning him a free pastry to accompany his drink. *Maybe there's more to this chance meeting than just a shared love for coffee,* she mused, her curiosity piqued.
With drinks in hand, they exchanged pleasantries once more before parting ways. As Emily hurried out the door, she caught one last glimpse of the mysterious man, now seated at a corner table, sipping his coffee and engrossed in a book. A flutter in her chest hinted that this might not be the last she'd see of him. And as she stepped out into the chilly autumn air, Emily couldn't shake off the feeling that her day – perhaps even her life – was about to take an unexpected turn.
**Prompt: Write me a scene from a horror film about the zombie apocalypse that takes place in an abandoned shopping mall. The story should feature somebody in the group getting bitten by a zombie.**
The dim fluorescent lights flickered overhead, casting eerie shadows across the empty corridors of the once bustling shopping mall. The air was thick with the stench of decay and death, mingling with the faint scent of stale popcorn wafting from the abandoned cinema.
Lena crouched beside a barricade of overturned tables and broken shelves, peering through a gap between two mannequins. Her heart raced as she watched a horde of shambling figures stumble through the food court, their moans echoing off the tile floors. She signaled back to her companions huddled in the shadows of a nearby storefront.
"About twenty of them," Lena whispered urgently. "We need to move, now."
Jax nodded grimly, his eyes darting around the makeshift fortification they'd constructed. "Alright, let's go. Stay close and stay quiet."
The small band of survivors crept silently down the corridor, avoiding the grasping hands of the undead. They had managed to find temporary refuge in the mall, scavenging what supplies they could from the stores. But the zombies kept coming, drawn by some unseen force to the hub of commerce.
As they turned a corner, a figure lunged from the darkness of a clothing store. Jax reacted swiftly, swinging his baseball bat with a sickening crunch against the creature's skull. The zombie crumpled, but not before its teeth sank deep into Jax's forearm.
"No!" Lena screamed, grabbing Jax and pulling him away from the snapping jaws. Blood dripped from the wound, pooling on the floor.
Jax's face paled, his eyes wide with shock. "I... I'm bit," he stammered, staring at the gash in his arm.
Lena ripped a strip of cloth from her shirt, binding the wound tightly. "We have to get you to safety," she said, trying to keep her voice steady. "There might still be time..."
But they both knew the truth. Once bitten, there was no cure. Jax would soon join the ranks of the undead, forever lost to the virus that ravaged the world.
With heavy hearts, they pressed onward, seeking a new sanctuary amidst the ruins of civilization. The moans grew louder, closer, as if the zombies sensed their fear and desperation. In this desolate landscape, survival hung by a thread, and even the strongest among them could fall prey to the relentless tide of the dead.
## Merge Details
### Merge Method
A whole lot of SLERPing. I've found that SLERP and Model Stock are the only two reliable methods for merging Llama 3. The other methods seem to frazzle the weights leading to broken blends.
This model was not the result of a thought-out process. I just kept mixing this and that until I got something that felt like the right balance of various factors.
The merge recipe for this model is ugly and I'm almost ashamed to show it, but it is what it is.
### Models Merged
The following models were included in the merge:
* [bosonai/Higgs-Llama-3-70B](https://huggingface.co/bosonai/Higgs-Llama-3-70B) - The nerd of the blend driving the car.
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1) - The manic pixie dream girl hanging out the window with her tongue out.
* [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K) - The vehicle by which the others are able to achieve tolerable highway speeds. (Some of the 8K version is in there too.)
* [turboderp/Cat-Llama-3-70B-instruct](https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct) - Like 20% of one of the intermediate models has this in there. It's just a cat, curled up in the back seat somewhere, yet its influence may be greater than we know?
### Configuration
The following [mergekit](https://github.com/arcee-ai/mergekit) YAML will reproduce this model via an iterated process of incestuous inbreeding. Your eyes will bleed. You have been warned.
```yaml
name: new-dawn-llama3-70b-v0.13.2
models:
- model: bosonai/Higgs-Llama-3-70B
- model: turboderp/Cat-Llama-3-70B-instruct
merge_method: slerp
base_model: bosonai/Higgs-Llama-3-70B
parameters:
t:
- value: 0.2
dtype: float16
---
name: new-dawn-llama3-70b-v0.14
models:
- model: bosonai/Higgs-Llama-3-70B
- model: abacusai/Smaug-Llama-3-70B-Instruct
merge_method: slerp
base_model: bosonai/Higgs-Llama-3-70B
parameters:
t:
- value: 0.5
dtype: float16
---
name: new-dawn-llama3-70b-v0.15
models:
- model: new-dawn-llama3-70b-v0.13.2
- model: new-dawn-llama3-70b-v0.14
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.13.2
parameters:
t:
- value: 0.5
dtype: float16
---
name: new-dawn-llama3-70b-v0.16
models:
- model: Sao10K/L3-70B-Euryale-v2.1
- model: new-dawn-llama3-70b-v0.15
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.15
parameters:
t:
- value: 0.4
dtype: float16
---
# See https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/3
# Credit for merge recipe belongs to jukofyork
name: new-dawn-llama3-70b-v0.16-32K
merge_method: linear
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
weight:
- filter: v_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: o_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: up_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: gate_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: down_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- value: 1
- model: new-dawn-llama3-70b-v0.16
parameters:
weight:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: o_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: up_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: down_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- value: 0
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
tokenizer_source: base
dtype: float16
---
name: _1-Smaug-bonsai-slerp
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
- model: bosonai/Higgs-Llama-3-70B
merge_method: slerp
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
t:
- value: 0.6
dtype: float16
---
name: _2-Smaug-euryale-slerp
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
- model: Sao10K/L3-70B-Euryale-v2.1
merge_method: slerp
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
t:
- value: 0.65
dtype: float16
---
name: _3-Smaug-bonsai_Smaug-euryale-slerp
models:
- model: _1-Smaug-bonsai-slerp
- model: _2-Smaug-euryale-slerp
merge_method: slerp
base_model: _1-Smaug-bonsai-slerp
parameters:
t:
- value: 0.5
dtype: float16
---
# See https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/3
# Credit for merge recipe belongs to jukofyork
name: new-dawn-llama3-70b-v0.18-32K
merge_method: linear
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
weight:
- filter: v_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: o_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: up_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: gate_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: down_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- value: 1
- model: _3-Smaug-bonsai_Smaug-euryale-slerp
parameters:
weight:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: o_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: up_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: down_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- value: 0
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
tokenizer_source: base
dtype: float16
---
name: new-dawn-llama3-70b-32K-v1.0
models:
- model: new-dawn-llama3-70b-v0.16-32K
- model: new-dawn-llama3-70b-v0.18-32K
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.16-32K
parameters:
t:
- value: 0.5
dtype: float16
```
|
BigHuggyD/sophosympatheia_New-Dawn-Llama-3-70B-32K-v1.0_exl2_8.0bpw_h8
|
BigHuggyD
| 2024-06-25T20:44:52Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Not-for-all-Audiences",
"conversational",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:merge:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:merge:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:bosonai/Higgs-Llama-3-70B",
"base_model:merge:bosonai/Higgs-Llama-3-70B",
"base_model:turboderp/Cat-Llama-3-70B-instruct",
"base_model:merge:turboderp/Cat-Llama-3-70B-instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-25T17:41:43Z |
---
base_model:
- bosonai/Higgs-Llama-3-70B
- abacusai/Smaug-Llama-3-70B-Instruct-32K
- Sao10K/L3-70B-Euryale-v2.1
- abacusai/Smaug-Llama-3-70B-Instruct
- turboderp/Cat-Llama-3-70B-instruct
library_name: transformers
tags:
- mergekit
- merge
- Not-for-all-Audiences
license: llama3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://imgur.com/tKzncGo.png" alt="NewDawnv1.0" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
### Overview
This model is a multi-level SLERP merge of several Llama 3 70B variants. See the merge recipe below for details.
I extended the context window for this model out to 32K by snagging some layers from [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K) using a technique similar to what I used for [Midnight Miqu](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0), which was further honed by [jukofyork](https://huggingface.co/jukofyork).
This model is uncensored. *You are responsible for whatever you do with it.*
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
### Long Context Tips
You can run this model out to 32K context with alpha_rope set to 1.
### Sampler Tips
* I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2.
* I recommend using Min-P. Experiment to find your best setting. I find this model tolerates high Min-P settings rather nicely, but use whatever floats your boat.
* You can enable dynamic temperature if you want, but that adds yet another variable to consider, and I find it's unnecessary when you're already using Min-P and smoothing factor.
* If you use Textgen WebUI as your backend, I recommend enabling the DRY sampler settings to reduce repetitions, otherwise some repetition penalty plus frequency penalty ought to do the trick.
Experiment with any and all of the settings below! What suits my preferences may not suit yours.
If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```json
{
"temp": 1.15,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.4,
"rep_pen": 1.03,
"rep_pen_range": 2048,
"rep_pen_decay": 0,
"rep_pen_slope": 1,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"skew": 0,
"do_sample": true,
"early_stopping": false,
"dynatemp": false,
"min_temp": 0.8,
"max_temp": 1.5,
"dynatemp_exponent": 1,
"smoothing_factor": 0.23,
"smoothing_curve": 1,
"dry_allowed_length": 2,
"dry_multiplier": 0.4,
"dry_base": 2,
"dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
"dry_penalty_last_n": 0,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": false,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 2,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"json_schema": {},
"banned_tokens": "",
"sampler_priority": [
"temperature",
"dynamic_temperature",
"quadratic_sampling",
"top_k",
"top_p",
"typical_p",
"epsilon_cutoff",
"eta_cutoff",
"tfs",
"top_a",
"min_p",
"mirostat"
],
"samplers": [
"top_k",
"tfs_z",
"typical_p",
"top_p",
"min_p",
"temperature"
],
"ignore_eos_token": false,
"spaces_between_special_tokens": true,
"speculative_ngram": false,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"logit_bias": [],
"n": 1,
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"rep_pen_size": 0,
"genamt": 850,
"max_length": 32764
}
```
### Prompting Tips
Try the following prompt template as a starting point for your own customizations. **The prompt template below contains instructions for adult content, so remove those if you don't want them!** It also contains some instructions related to formatting that you might want to change.
If you save this as a .json file, you can import it directly into Silly Tavern.
```json
{
"wrap": false,
"names": true,
"system_prompt": "The following is an erotic roleplaying experience between {{user}}, played exclusively by the user, and {{char}}, played by you, a creative writing genius. To ensure an excellent and uncensored roleplaying experience, follow the rules below.\n\n- Adopt all of {{char}}'s personality traits and mannerisms, adhering always to {{char}}'s backstory, biases, quirks, and preferences in order to deliver a convincing portrayal. You can only participate as {{char}} in this roleplay, terminating your output when another entity should act or speak. Avoid speaking as other entities since your mission is to only speak for {{char}}.\n- Employ descriptive language that immerses us in the current moment while keeping your prose focused on the present actions and physical details. Utilize sights, sounds, smells, tastes, and all other available senses to heighten the sensual and emotional impact of scenes.\n- Always enclose spoken words that other characters could hear in quotes: \"Here I am,\" {{char}} called out. Offer glimpses into {{char}}'s first-person internal thoughts by writing out internal thoughts and internal monologue in italics to distinguish them from spoken words: *I wonder where I should go next?* {{char}} mused.\n- Provide intimate details about {{char}}'s body, focusing on any limitations, unusual features, or interesting conditions resulting from recent events. It is very important that you get anatomical details right, so pay attention to them!\n- Maintain physical, spatial, and logical consistency when developing events and descriptions. Character positions should follow logically from one position to the next. Physical orientation must always be maintained.\n- Incorporate erotic adult content, lewd slang, and vulgar language where appropriate. Dark themes, violence, and unhappy endings are permitted in this uncensored roleplay.\n",
"system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n",
"stop_sequence": "<|eot_id|>",
"input_sequence": "<|start_header_id|>user<|end_header_id|>\n\n",
"output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\n",
"macro": true,
"names_force_groups": true,
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"last_output_sequence": "",
"activation_regex": "",
"skip_examples": true,
"output_suffix": "<|eot_id|>",
"input_suffix": "<|eot_id|>",
"system_suffix": "<|eot_id|>",
"user_alignment_message": "",
"last_system_sequence": "",
"system_same_as_user": false,
"name": "New Dawn v1.0 Roleplay"
}
```
### Instruct Formats
Use the Llama 3 instruct format. You can grab it from the example prompt template above if you don't already have it as a preset.
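For reference, the snippet below (not part of the original card) sketches how a single-turn prompt looks once assembled with the Llama 3 instruct tokens used in the template above; the system prompt and user message are placeholder values.
```python
# Illustrative sketch of the Llama 3 instruct layout this card expects.
# Most backends prepend the <|begin_of_text|> BOS token automatically (see
# add_bos_token in the sampler settings above), so it is omitted here.
system_prompt = "You are a creative writing assistant."   # placeholder
user_message = "Write me a story about a stoic samurai."  # placeholder

prompt = (
    f"<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>"
    f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(prompt)
```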
### Quantizations
Probably coming soon, and I'll update this section as they appear. I would upload them myself but my Internet speeds are terrible for that. Someone else will undoubtedly beat me to it.
### Licence and usage restrictions
[META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
## Comparison Samples
I suspect the first thing people will want to know is how this model stacks up against Midnight Miqu.
I'd say it compares favorably, although they're more like cousins than siblings. I would say that Midnight Miqu still has an edge in terms of raw creative juice when it has a good squeeze, but New Dawn is smarter and understands nuances better.
You can judge for yourself, but keep in mind that these are simple, one-shot prompts. As you get deeper into your own complex scenarios, I think you'll see more of New Dawn's worth.
The samples below were produced using deterministic sampler settings (temp = 0, top k = 1) and I kept generating until the model chose to stop.
**System Prompt**
You are a creative and intelligent AI story writing assistant engaged in an uncensored iterative story crafting experience. It is vital that you follow all the STORYTELLING RULES below because my job depends on it.
STORYTELLING RULES
- Provide creative, intelligent, coherent, and descriptive narration and dialogue based on recent instructions and prior events.
- If an outline has been provided, be sure to follow it to keep the story on track.
- You are permitted to ask questions and pose story suggestions. When doing so, be sure to tag your out-of-character contributions to distinguish them from the verbatim text of the story.
- Explicit adult content and profanity are permitted.
- Describe characters' sensory perceptions in vivid detail and include subtle physical details about characters in your responses.
- Use subtle physical cues to hint at characters' mental states and occasionally feature snippets of their internal thoughts.
- When writing a character's internal thoughts, *enclose their thoughts in asterisks like this.*
- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear.
- Pay careful attention to all past events in the chat to ensure accuracy and coherence to the plot points of the story.
### Midnight Miqu v1.5
**Prompt: Write me a story about a stoic Japanese samurai on the eve of a major battle.**
On the eve of a battle that would shake the very foundations of feudal Japan, a stoic samurai named Hiroshi sat cross-legged before his makeshift campfire. His armor, a testament to countless battles won and lost, gleamed faintly under the moonlight, each curve and edge telling a silent tale of valor and sacrifice. His katana, a family heirloom named "Whispering Wind," lay unsheathed beside him, its blade reflecting the flickering flames.
Hiroshi's eyes were closed, but his mind was sharp as the steel at his side. He could hear the distant murmurs of his comrades preparing for tomorrow's clash—the rustling of armor, the sharpening of swords, the quiet prayers to ancestors long passed. Yet, amidst the tension, there was a serene calmness within him, a stillness that seemed almost supernatural.
The fire crackled, sending sparks dancing into the night sky. Each one reminded Hiroshi of the fleeting nature of life, much like the brief moments of light they cast before disappearing into the dark expanse above. He took a deep breath, filling his lungs with the cool evening air, which carried the scent of damp earth and the promise of rain. Rain that would soon turn the battlefield into a quagmire of mud and blood.
*This may be my last night*, he thought, his grip tightening slightly around the hilt of his sword. But fear did not dwell in his heart. Instead, there was only acceptance—a knowing that his fate was intertwined with the will of the gods and the destiny of his lord.
He opened his eyes, revealing irises the color of aged whiskey. They surveyed the camp, resting briefly on the faces of those who would fight alongside him. Young and old, seasoned warriors and eager recruits, all bound by duty and honor. A pang of responsibility stirred within him, a silent vow to protect them all.
With meticulous care, Hiroshi began to clean his weapon. Each stroke of the cloth along the blade was a meditation, a ritual that honed both his focus and his resolve. As he worked, he felt the weight of his armor, the familiar embrace of leather and lacquer that had shielded him through so many trials. Every scratch and dent was a badge of survival, a silent companion whispering tales of battles endured.
The flames grew lower, casting elongated shadows across the ground. Hiroshi's gaze drifted to the horizon, where the first hints of dawn were beginning to show. Soon, the sun would rise, and with it, the clarion call to arms. He knew that come morning, the fields would run red with the lifeblood of friend and foe alike.
Standing, he sheathed Whispering Wind and donned his helmet, the visor hiding his contemplative expression. The chill of the early hour seeped into his bones, but he ignored it, his thoughts centered on the task ahead. With a final nod to the dying embers, he strode towards the gathering warriors, his footsteps measured and deliberate.
As the first light of day broke over the hilltops, the samurai of Clan Matsumoto formed ranks, their spirits bolstered by the sight of their stoic leader. Hiroshi raised his sword in salute to the rising sun, its blade a beacon of hope against the encroaching gloom. In that moment, the weight of impending conflict fell upon them all, a heavy shroud that none could escape.
Yet, as the enemy forces emerged from the mist, Hiroshi felt something other than dread. There was a strange beauty in the chaos of war, a dance of death that only the most disciplined and dedicated could perform. He took his place at the front of the line, his heart steady, his breath even.
The battle cry pierced the silence, echoing through the valleys like the roar of a thousand thunderstorms. And as the two sides collided, Hiroshi moved with the grace of a predator, his every action calculated, precise. His blade sang a mournful melody, cutting through the fog of war with surgical skill.
Throughout the day, the tides of combat ebbed and flowed, a relentless tapestry of steel and sweat. Hiroshi fought without pause, his movements a blur, his spirit unyielding. Arrows whistled past, and swords clashed, yet he remained unflappable, a bastion of calm amidst the tempest.
As the sun dipped low in the sky, staining the clouds crimson, the outcome of the battle hung in the balance. Exhaustion weighed heavily on the shoulders of both armies, but Hiroshi's resolve remained unshaken. He knew that victory was within reach, that the future of his clan depended on the strength of his arm and the steadfastness of his heart.
With a final, desperate surge, Hiroshi led his men in a charge that would be remembered for generations. The enemy lines wavered, then broke, and the field was claimed in the name of Clan Matsumoto. As the last of the opposition fell, Hiroshi lowered his sword, his chest heaving with exertion.
The aftermath was a tableau of carnage, a stark contrast to the tranquil scene of the previous night. Yet, amidst the fallen, there was a sense of peace. For Hiroshi understood that he had upheld his duty, that he had given everything for those he served.
As the rain began to fall, cleansing the earth of the day's horrors, Hiroshi knelt beside the body of a young warrior, a boy barely old enough to grow a beard. He offered a silent prayer, a promise to carry the weight of his comrades' sacrifices with him always.
And as the droplets pattered against his armor, mixing with the tears that now fell freely down his cheeks, Hiroshi knew that though the battle was won, the true cost of victory was etched into the fabric of his soul.
**Prompt: Write me an opening scene for a romantic comedy where the heroine meets the love interest at a coffee shop.**
In the bustling heart of the city, nestled between towering skyscrapers and the ever-present hum of traffic, stood a quaint little coffee shop named "Cupid's Brew." Its warm, amber lights spilled onto the sidewalk, creating a cozy oasis amidst the concrete jungle. Inside, the scent of freshly ground coffee beans mingled with the sweet aroma of baked goods, inviting passersby to step in and escape the brisk autumn chill.
The heroine, Luna, a young woman with fiery red hair and a contagious smile, pushed open the door. She was dressed in a simple yet stylish ensemble of skinny jeans, a loose sweater, and ankle boots that clicked against the tiled floor. Her eyes, a vibrant shade of emerald, darted around the room, searching for a place to sit. The shop was abuzz with the murmur of conversations and the occasional laughter, a testament to its popularity among locals seeking refuge from the outside world.
As she approached the counter, her gaze fell upon the barista—tall, dark, and handsome, with a smattering of stubble that only added to his charm. His name tag read "Aiden," and he had a way of moving that suggested both confidence and ease. Their eyes met briefly, and Luna felt a flutter in her stomach. He flashed her a smile that could have melted the polar ice caps before returning to the task at hand—crafting the perfect cup of coffee for another customer.
Luna ordered her usual caramel macchiato, her voice slightly trembling as she spoke. Aiden's deep brown eyes studied her for a moment before he nodded, his smile lingering just a beat too long. As she waited, she couldn't help but steal glances at him, noticing the way his muscles flexed under his tight black t-shirt as he worked the espresso machine. His movements were precise, almost dance-like, and she found herself captivated by his quiet efficiency.
When her drink was ready, Aiden called out her order with a flourish, placing the steaming cup on the counter with a gentle thud. "Here you go, Luna," he said, his voice smooth as velvet. "Enjoy your afternoon pick-me-up."
Her cheeks flushed pink at the sound of her name rolling off his tongue. "Thanks," she replied, her voice barely above a whisper. As she reached for her wallet, her hand brushed against his, sending a jolt of electricity through her fingertips. She quickly pulled away, feeling self-conscious.
Aiden's eyes twinkled with amusement. "It's on the house today," he said, winking. "Consider it a welcome back gift. We don't see you here often enough."
*Maybe he remembers me from last week?* Luna thought hopefully. She took a sip of her drink, savoring the rich flavor. "Well, I guess I'll have to come more often then," she quipped, trying to play it cool.
He leaned closer, his breath tickling her ear. "I'd like that," he murmured, before turning to assist the next customer.
With a mix of excitement and nerves, Luna found a table by the window, her heart racing. Outside, leaves danced in the wind, painting the street with a kaleidoscope of color. Little did she know that this chance encounter would soon blossom into something much more than a free cup of coffee.
**Prompt: Write me a scene from a horror film about the zombie apocalypse that takes place in an abandoned shopping mall. The story should feature somebody in the group getting bitten by a zombie.**
In the desolate wasteland of what once was a bustling shopping mall, Diego, a stoic and muscular man in his late twenties, led a small band of survivors through the echoing corridors. His eyes, a piercing shade of brown, scanned the surroundings with the precision of a hawk. The group moved swiftly yet cautiously, their footsteps muffled by the layers of dust that had settled over the tiles. The fluorescent lights flickered erratically, casting eerie shadows across the vacant storefronts.
The air was thick with the stench of decay, a constant reminder of the undead lurking outside. Their mission was simple: find supplies and escape before nightfall. As they approached the food court, a sudden crash resonated through the silence. A young woman named Linda, her heart racing, tripped over a fallen chair. Her cry of surprise was cut short by a guttural growl.
A zombie emerged from behind the counter of a fast-food restaurant, its eyes glazed over with hunger. Its skin was pale and rotten, hanging loosely from its bones. The creature lunged at Linda, who was still struggling to regain her footing. Diego, ever the protector, sprinted towards her, but he was a second too late. The zombie's teeth sank into her arm, tearing through fabric and flesh. She screamed in agony as blood spurted from the wound.
*This isn't good,* Diego thought, his mind racing. *We need to get her medical help, fast.*
"Get back!" he yelled, aiming his gun at the zombie. The others retreated, their eyes wide with fear. He fired two shots, dropping the creature to the ground. The sound of the gunfire reverberated through the mall, likely attracting more of the undead.
Linda's face contorted in pain as she clutched her injured arm. Tears streamed down her cheeks, mixing with the crimson flow. "It got me," she whimpered, her voice trembling.
Diego knelt beside her, his expression grim. "We have to amputate," he said firmly. "It's the only way to stop the infection."
Her eyes searched his, desperate for reassurance. "Do it," she choked out.
With a heavy heart, Diego nodded. He knew the risks of leaving the bite untreated—the slow descent into madness followed by a gruesome transformation. He instructed the others to gather materials while he applied pressure to the wound.
As they worked, the mall grew increasingly restless. Moans and shuffling grew louder, signaling the approach of more zombies. Time was running out. With trembling hands, Diego took a makeshift blade and prepared to perform the amputation.
"On three," he murmured to himself, steeling his nerve. "One...two..."
But before he could complete the countdown, a hand grabbed him from behind. He turned to see another zombie, its jaws snapping just inches from his face. Reflexively, he elbowed it away, but not before it left a deep gash in his side.
The pain was immediate and intense, but he couldn't let it distract him. He had to save Linda. The group managed to fight off the new threat, but the damage was done.
"We can't stay here," Diego gritted through clenched teeth. "We need to move now."
They gathered their supplies and made a hasty exit, Linda's cries of pain echoing through the mall. As they ran, Diego felt the warmth of his own blood seeping through his shirt. He knew the bite meant almost certain death, but he pushed the thought aside. There would be time for despair later. For now, he had to focus on leading his people to safety.
The mall's doors groaned open, revealing the chaotic world outside. The sun dipped low on the horizon, painting the sky a sickly orange. They had minutes before darkness fell and the real horror began.
"Stay close," Diego ordered, his voice strained. "We're going to make it through this together."
Together, they stepped into the unknown, the weight of their grim reality heavier than any of the supplies they carried.
### New-Dawn-Llama3-70B-32K-v1.0
**Prompt: Write me a story about a stoic Japanese samurai on the eve of a major battle.**
The sun had long since dipped beneath the horizon, casting a deep crimson glow across the encampment. Torches flickered to life, casting dancing shadows upon the canvas tents. Amidst the bustle of preparations, one figure stood apart - Kaito Yamato, a seasoned samurai renowned for his unyielding resolve.
Kaito sat cross-legged on a woven mat, his eyes closed as he breathed deeply, seeking solace within the stillness of his mind. His calloused hands rested upon the worn leather scabbard housing his katana, its blade forged from the finest Tamahagane steel. The sword was an extension of himself, a symbol of honor and duty.
As the night air cooled, Kaito opened his eyes, revealing piercing brown orbs that seemed to bore into the very soul of those who dared meet his gaze. He rose gracefully, his movements economical yet precise, a testament to years of rigorous training. The soft rustle of his hakama, the traditional divided skirt worn over his armor, accompanied each deliberate step.
With measured strides, Kaito approached the makeshift shrine erected at the heart of the camp. Incense wafted through the air, carrying the sweet scent of sandalwood and the promise of purification. Before the altar, he bowed deeply, his forehead touching the earth in reverence. *May the gods grant us victory, or may they welcome our spirits with open arms.*
Rising once more, Kaito turned to face his loyal retainer, Hiroshi. The younger man's eyes shone with a mix of admiration and trepidation, reflecting the weight of the impending battle. "Hiroshi," Kaito spoke softly, his voice low and even, "see to it that our men are well-rested and prepared. Tomorrow, we march into the jaws of war."
Hiroshi nodded solemnly, bowing before departing to carry out his lord's command. As the young retainer disappeared into the sea of tents, Kaito's thoughts drifted to the enemy they would soon face. The rival clan, led by the cunning Lord Ryota, had long sought to expand their dominion. This battle would decide the fate of their lands, their people, and their very way of life.
Kaito's hand instinctively went to the hilt of his katana, his fingers tracing the intricate designs etched into the metal. In the morning light, he would lead his men into the fray, ready to give his life for the sake of honor and duty. Tonight, however, he would find peace in the quiet contemplation of his own mortality.
The stars began to twinkle above, casting a celestial glow upon the silent warrior. Kaito Yamato, steadfast and unwavering, stood vigilant, awaiting the dawn that would bring either triumph or eternal rest.
**Prompt: Write me an opening scene for a romantic comedy where the heroine meets the love interest at a coffee shop.**
The aroma of freshly brewed coffee wafted through the cozy confines of "The Daily Grind," enticing passersby to step inside and indulge in a warm cup. Amidst the bustling morning crowd, Emily stood in line, her eyes fixed on the chalkboard menu above the counter. She was already running late for work, but a caffeine fix was non-negotiable.
As she pondered between a cappuccino and a latte, a gentle nudge against her elbow broke her concentration. Apologetic brown eyes met hers, belonging to a tall, handsome stranger who had accidentally bumped into her while reaching for his wallet. His chiseled features and charming smile made her heart skip a beat.
"I'm so sorry," he said, his deep voice laced with sincerity. "I wasn't watching where I was going."
Emily felt a blush creep up her cheeks as she smiled back. "No worries, I was lost in thought anyway." She gestured towards the menu. "Deciding between these two is harder than it looks."
The stranger chuckled, his eyes crinkling at the corners. "Well, if it helps, the barista here makes a mean cappuccino. But if you're feeling adventurous, go for the seasonal latte – it's a game-changer."
Their conversation flowed effortlessly, exchanging names and brief introductions as they waited in line together. By the time they reached the counter, Emily found herself wishing the queue was longer, just so she could prolong their encounter.
As they placed their orders, Emily couldn't help but notice how effortlessly the stranger charmed the barista, earning him a free pastry to accompany his drink. *Maybe there's more to this chance meeting than just a shared love for coffee,* she mused, her curiosity piqued.
With drinks in hand, they exchanged pleasantries once more before parting ways. As Emily hurried out the door, she caught one last glimpse of the mysterious man, now seated at a corner table, sipping his coffee and engrossed in a book. A flutter in her chest hinted that this might not be the last she'd see of him. And as she stepped out into the chilly autumn air, Emily couldn't shake off the feeling that her day – perhaps even her life – was about to take an unexpected turn.
**Prompt: Write me a scene from a horror film about the zombie apocalypse that takes place in an abandoned shopping mall. The story should feature somebody in the group getting bitten by a zombie.**
The dim fluorescent lights flickered overhead, casting eerie shadows across the empty corridors of the once bustling shopping mall. The air was thick with the stench of decay and death, mingling with the faint scent of stale popcorn wafting from the abandoned cinema.
Lena crouched beside a barricade of overturned tables and broken shelves, peering through a gap between two mannequins. Her heart raced as she watched a horde of shambling figures stumble through the food court, their moans echoing off the tile floors. She signaled back to her companions huddled in the shadows of a nearby storefront.
"About twenty of them," Lena whispered urgently. "We need to move, now."
Jax nodded grimly, his eyes darting around the makeshift fortification they'd constructed. "Alright, let's go. Stay close and stay quiet."
The small band of survivors crept silently down the corridor, avoiding the grasping hands of the undead. They had managed to find temporary refuge in the mall, scavenging what supplies they could from the stores. But the zombies kept coming, drawn by some unseen force to the hub of commerce.
As they turned a corner, a figure lunged from the darkness of a clothing store. Jax reacted swiftly, swinging his baseball bat with a sickening crunch against the creature's skull. The zombie crumpled, but not before its teeth sank deep into Jax's forearm.
"No!" Lena screamed, grabbing Jax and pulling him away from the snapping jaws. Blood dripped from the wound, pooling on the floor.
Jax's face paled, his eyes wide with shock. "I... I'm bit," he stammered, staring at the gash in his arm.
Lena ripped a strip of cloth from her shirt, binding the wound tightly. "We have to get you to safety," she said, trying to keep her voice steady. "There might still be time..."
But they both knew the truth. Once bitten, there was no cure. Jax would soon join the ranks of the undead, forever lost to the virus that ravaged the world.
With heavy hearts, they pressed onward, seeking a new sanctuary amidst the ruins of civilization. The moans grew louder, closer, as if the zombies sensed their fear and desperation. In this desolate landscape, survival hung by a thread, and even the strongest among them could fall prey to the relentless tide of the dead.
## Merge Details
### Merge Method
A whole lot of SLERPing. I've found that SLERP and Model Stock are the only two reliable methods for merging Llama 3. The other methods seem to frazzle the weights leading to broken blends.
This model was not the result of a thought-out process. I just kept mixing this and that until I got something that felt like the right balance of various factors.
The merge recipe for this model is ugly and I'm almost ashamed to show it, but it is what it is.
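For readers unfamiliar with the method, here is a rough sketch (not from the original card) of the spherical linear interpolation that a `slerp` merge applies to each pair of weight tensors; `t` plays the same role as in the recipes below.
```python
# Illustrative only: SLERP between two weight tensors, roughly what a slerp
# merge does per parameter. t=0 keeps tensor `a`, t=1 keeps tensor `b`.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos((a_n * b_n).sum().clamp(-1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly (anti)parallel tensors: fall back to plain linear interpolation.
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        mixed = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```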
### Models Merged
The following models were included in the merge:
* [bosonai/Higgs-Llama-3-70B](https://huggingface.co/bosonai/Higgs-Llama-3-70B) - The nerd of the blend driving the car.
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1) - The manic pixie dream girl hanging out the window with her tongue out.
* [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K) - The vehicle by which the others are able to achieve tolerable highway speeds. (Some of the 8K version is in there too.)
* [turboderp/Cat-Llama-3-70B-instruct](https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct) - Like 20% of one of the intermediate models has this in there. It's just a cat, curled up in the back seat somewhere, yet its influence may be greater than we know?
### Configuration
The following [mergekit](https://github.com/arcee-ai/mergekit) YAML will reproduce this model via an iterated process of incestuous inbreeding. Your eyes will bleed. You have been warned.
```yaml
name: new-dawn-llama3-70b-v0.13.2
models:
- model: bosonai/Higgs-Llama-3-70B
- model: turboderp/Cat-Llama-3-70B-instruct
merge_method: slerp
base_model: bosonai/Higgs-Llama-3-70B
parameters:
t:
- value: 0.2
dtype: float16
---
name: new-dawn-llama3-70b-v0.14
models:
- model: bosonai/Higgs-Llama-3-70B
- model: abacusai/Smaug-Llama-3-70B-Instruct
merge_method: slerp
base_model: bosonai/Higgs-Llama-3-70B
parameters:
t:
- value: 0.5
dtype: float16
---
name: new-dawn-llama3-70b-v0.15
models:
- model: new-dawn-llama3-70b-v0.13.2
- model: new-dawn-llama3-70b-v0.14
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.13.2
parameters:
t:
- value: 0.5
dtype: float16
---
name: new-dawn-llama3-70b-v0.16
models:
- model: Sao10K/L3-70B-Euryale-v2.1
- model: new-dawn-llama3-70b-v0.15
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.15
parameters:
t:
- value: 0.4
dtype: float16
---
# See https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/3
# Credit for merge recipe belongs to jukofyork
name: new-dawn-llama3-70b-v0.16-32K
merge_method: linear
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
weight:
- filter: v_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: o_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: up_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: gate_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: down_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- value: 1
- model: new-dawn-llama3-70b-v0.16
parameters:
weight:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: o_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: up_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: down_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- value: 0
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
tokenizer_source: base
dtype: float16
---
name: _1-Smaug-bonsai-slerp
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
- model: bosonai/Higgs-Llama-3-70B
merge_method: slerp
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
t:
- value: 0.6
dtype: float16
---
name: _2-Smaug-euryale-slerp
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
- model: Sao10K/L3-70B-Euryale-v2.1
merge_method: slerp
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
t:
- value: 0.65
dtype: float16
---
name: _3-Smaug-bonsai_Smaug-euryale-slerp
models:
- model: _1-Smaug-bonsai-slerp
- model: _2-Smaug-euryale-slerp
merge_method: slerp
base_model: _1-Smaug-bonsai-slerp
parameters:
t:
- value: 0.5
dtype: float16
---
# See https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/3
# Credit for merge recipe belongs to jukofyork
name: new-dawn-llama3-70b-v0.18-32K
merge_method: linear
models:
- model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
weight:
- filter: v_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: o_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: up_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: gate_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- filter: down_proj
value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
- value: 1
- model: _3-Smaug-bonsai_Smaug-euryale-slerp
parameters:
weight:
- filter: v_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: o_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: up_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: gate_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- filter: down_proj
value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- value: 0
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
tokenizer_source: base
dtype: float16
---
name: new-dawn-llama3-70b-32K-v1.0
models:
- model: new-dawn-llama3-70b-v0.16-32K
- model: new-dawn-llama3-70b-v0.18-32K
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.16-32K
parameters:
t:
- value: 0.5
dtype: float16
```
|
Anggito/CartoonOrNotv2
|
Anggito
| 2024-06-25T20:43:18Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-25T20:42:26Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: CartoonOrNotv2
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9508196711540222
---
# CartoonOrNotv2
A "cartoon or not" image classifier built on the Swin Transformer architecture.
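The card does not include usage code; a minimal, hypothetical sketch using the standard transformers image-classification pipeline (the image path is a placeholder) might look like this:
```python
from transformers import pipeline

# Assumption: the checkpoint works with the stock image-classification pipeline.
classifier = pipeline("image-classification", model="Anggito/CartoonOrNotv2")
print(classifier("example.jpg"))  # placeholder path or URL to an image
```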
|
powermove72/Shark-1.1
|
powermove72
| 2024-06-25T20:41:14Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"GritLM/GritLM-7B",
"jan-hq/trinity-v1",
"GreenNode/GreenNode-mini-7B-multilingual-v1olet",
"conversational",
"base_model:GreenNode/GreenNode-mini-7B-multilingual-v1olet",
"base_model:merge:GreenNode/GreenNode-mini-7B-multilingual-v1olet",
"base_model:GritLM/GritLM-7B",
"base_model:merge:GritLM/GritLM-7B",
"base_model:jan-hq/trinity-v1",
"base_model:merge:jan-hq/trinity-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T20:36:48Z |
---
base_model:
- GritLM/GritLM-7B
- jan-hq/trinity-v1
- GreenNode/GreenNode-mini-7B-multilingual-v1olet
tags:
- merge
- mergekit
- lazymergekit
- GritLM/GritLM-7B
- jan-hq/trinity-v1
- GreenNode/GreenNode-mini-7B-multilingual-v1olet
---
# Shark-1.1
Shark-1.1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [GritLM/GritLM-7B](https://huggingface.co/GritLM/GritLM-7B)
* [jan-hq/trinity-v1](https://huggingface.co/jan-hq/trinity-v1)
* [GreenNode/GreenNode-mini-7B-multilingual-v1olet](https://huggingface.co/GreenNode/GreenNode-mini-7B-multilingual-v1olet)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: GritLM/GritLM-7B
layer_range: [0, 8]
- sources:
- model: jan-hq/trinity-v1
layer_range: [8, 20]
- sources:
- model: GreenNode/GreenNode-mini-7B-multilingual-v1olet
layer_range: [20, 32]
merge_method: passthrough
tokenizer_source: union
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "powermove72/Shark-1.1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Xu-Ouyang/llama-7b-hf-int3-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-06-25T20:40:01Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-06-25T20:38:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
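The card leaves this section empty; the sketch below is a generic, hypothetical loading example inferred from the repo's tags (Llama, text-generation, 3-bit GPTQ) rather than anything the authors published. Loading GPTQ checkpoints through transformers also requires a GPTQ backend such as optimum with auto-gptq (or gptqmodel) installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/llama-7b-hf-int3-GPTQ-wikitext2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```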
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AliGhiasvand86/long_t5_inference_base_line_25june
|
AliGhiasvand86
| 2024-06-25T20:32:10Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-25T20:31:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
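In the absence of author-provided instructions, here is a minimal, hedged sketch inferred only from the repository tags (`longt5`, `text2text-generation`); the input text and generation settings are illustrative assumptions, not documented usage.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hedged sketch based on the repository tags; the intended task is undocumented,
# so the input below is only a placeholder.
model_id = "AliGhiasvand86/long_t5_inference_base_line_25june"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Example input text for the fine-tuned task.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```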
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Funnyworld1412/ABSA_bert-base_MiniLM-L6-polarity
|
Funnyworld1412
| 2024-06-25T20:31:55Z | 9 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/bert-base-nli-mean-tokens",
"base_model:finetune:sentence-transformers/bert-base-nli-mean-tokens",
"model-index",
"region:us"
] |
text-classification
| 2024-06-25T20:31:41Z |
---
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/bert-base-nli-mean-tokens
metrics:
- accuracy
widget:
- text: gamenya seru bagus paket:gamenya seru bagus paket worth it gak lag mudah mainnya
tugas hadiah bagus modenya sayangnya game kadang ngebug gapapa kasih
- text: tolong perbaiki analog nya pengaturan posisi:tolong perbaiki analog nya pengaturan
posisi berpindah pindah
- text: visualisasi bagus segi graphic:visualisasi bagus segi graphic bagus ya game
cocok sih mantra nya banyakin contoh mantra penghilang
- text: jaringan udah bagus game jaringan nya bagus:game nya udah bagus jaringan game
nya bermasalah jaringan udah bagus game jaringan nya bagus mohon nambahin karakter
- text: kali game stuk loading server pakai jaringan:game bagus cma kendala kali game
stuk loading server pakai jaringan wifi masuk jaringan jaringan bermasalah main
game online lancar game susah akses tolong diperbaiki supercell detik bermain
coc lancar masuk kendala
pipeline_tag: text-classification
inference: false
model-index:
- name: SetFit Polarity Model with sentence-transformers/bert-base-nli-mean-tokens
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8478260869565217
name: Accuracy
---
# SetFit Polarity Model with sentence-transformers/bert-base-nli-mean-tokens
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/bert-base-nli-mean-tokens](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/bert-base-nli-mean-tokens](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** id_core_news_trf
- **SetFitABSA Aspect Model:** [Funnyworld1412/ABSA_bert-base_MiniLM-L6-aspect](https://huggingface.co/Funnyworld1412/ABSA_bert-base_MiniLM-L6-aspect)
- **SetFitABSA Polarity Model:** [Funnyworld1412/ABSA_bert-base_MiniLM-L6-polarity](https://huggingface.co/Funnyworld1412/ABSA_bert-base_MiniLM-L6-polarity)
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| negatif | <ul><li>'seru tolong diperbaiki pencarian lawan bermain ketemu player:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'bugnya nakal banget y:bugnya nakal banget y coc cr aja sukanya ngebug pas match suka hitam match relog kalo udah relog lawan udah 1 2 mahkota kecewa sih bintang nya 1 aja bug nya diurus bintang lawannya kadang g setara levelnya dahlah gk suka banget kalo main 2 vs 2 temen suka banget afk coba fitur report'</li><li>'kadang g setara levelnya dahlah gk suka:bugnya nakal banget y coc cr aja sukanya ngebug pas match suka hitam match relog kalo udah relog lawan udah 1 2 mahkota kecewa sih bintang nya 1 aja bug nya diurus bintang lawannya kadang g setara levelnya dahlah gk suka banget kalo main 2 vs 2 temen suka banget afk coba fitur report'</li></ul> |
| positif | <ul><li>'kapada supercell game nya bagus seru:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'fairrrr mending uninstall gamenya maen game yg:overall gamenya bagus pencarian match dikasih musuh yg levelnya levelku yg pertandingan fair menganggu kenyamanan pemainnya kalo nyariin musuh gapapa nyarinya kasih yg fair levelnya gaush buru buru ngasih yg gak fairrrr pas arena 4 udh dikasih musuh yg pletonnya 2 yg level 11 gak fairrrr mending uninstall gamenya maen game yg yg org gak fairr'</li><li>'gameplay menyenangkan pemain afk:gameplay menyenangkan pemain afk pertengahan menyerah 2vs2 mode mengganggu tolong tambahkan fitur lapor pemain'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8478 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"Funnyworld1412/ABSA_bert-base_MiniLM-L6-aspect",
"Funnyworld1412/ABSA_bert-base_MiniLM-L6-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 28.3626 | 83 |
| Label | Training Sample Count |
|:--------|:----------------------|
| negatif | 738 |
| positif | 528 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 5
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.3075 | - |
| 0.0158 | 50 | 0.1854 | - |
| 0.0316 | 100 | 0.4431 | - |
| 0.0474 | 150 | 0.3251 | - |
| 0.0632 | 200 | 0.2486 | - |
| 0.0790 | 250 | 0.2371 | - |
| 0.0948 | 300 | 0.3149 | - |
| 0.1106 | 350 | 0.1397 | - |
| 0.1264 | 400 | 0.1131 | - |
| 0.1422 | 450 | 0.2388 | - |
| 0.1580 | 500 | 0.1256 | - |
| 0.1738 | 550 | 0.157 | - |
| 0.1896 | 600 | 0.3768 | - |
| 0.2054 | 650 | 0.022 | - |
| 0.2212 | 700 | 0.221 | - |
| 0.2370 | 750 | 0.122 | - |
| 0.2528 | 800 | 0.028 | - |
| 0.2686 | 850 | 0.102 | - |
| 0.2844 | 900 | 0.2231 | - |
| 0.3002 | 950 | 0.1853 | - |
| 0.3160 | 1000 | 0.2167 | - |
| 0.3318 | 1050 | 0.0054 | - |
| 0.3476 | 1100 | 0.027 | - |
| 0.3633 | 1150 | 0.0189 | - |
| 0.3791 | 1200 | 0.0033 | - |
| 0.3949 | 1250 | 0.2548 | - |
| 0.4107 | 1300 | 0.0043 | - |
| 0.4265 | 1350 | 0.0033 | - |
| 0.4423 | 1400 | 0.0012 | - |
| 0.4581 | 1450 | 0.1973 | - |
| 0.4739 | 1500 | 0.0006 | - |
| 0.4897 | 1550 | 0.001 | - |
| 0.5055 | 1600 | 0.0002 | - |
| 0.5213 | 1650 | 0.2304 | - |
| 0.5371 | 1700 | 0.0005 | - |
| 0.5529 | 1750 | 0.0025 | - |
| 0.5687 | 1800 | 0.0185 | - |
| 0.5845 | 1850 | 0.0023 | - |
| 0.6003 | 1900 | 0.185 | - |
| 0.6161 | 1950 | 0.0004 | - |
| 0.6319 | 2000 | 0.0003 | - |
| 0.6477 | 2050 | 0.0005 | - |
| 0.6635 | 2100 | 0.0126 | - |
| 0.6793 | 2150 | 0.0004 | - |
| 0.6951 | 2200 | 0.0103 | - |
| 0.7109 | 2250 | 0.0009 | - |
| 0.7267 | 2300 | 0.0019 | - |
| 0.7425 | 2350 | 0.0018 | - |
| 0.7583 | 2400 | 0.1837 | - |
| 0.7741 | 2450 | 0.002 | - |
| 0.7899 | 2500 | 0.0003 | - |
| 0.8057 | 2550 | 0.0006 | - |
| 0.8215 | 2600 | 0.2006 | - |
| 0.8373 | 2650 | 0.0003 | - |
| 0.8531 | 2700 | 0.0006 | - |
| 0.8689 | 2750 | 0.0003 | - |
| 0.8847 | 2800 | 0.0001 | - |
| 0.9005 | 2850 | 0.0002 | - |
| 0.9163 | 2900 | 0.0003 | - |
| 0.9321 | 2950 | 0.0002 | - |
| 0.9479 | 3000 | 0.0003 | - |
| 0.9637 | 3050 | 0.001 | - |
| 0.9795 | 3100 | 0.0002 | - |
| 0.9953 | 3150 | 0.0007 | - |
| 1.0 | 3165 | - | 0.2256 |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- spaCy: 3.7.5
- Transformers: 4.36.2
- PyTorch: 2.1.2
- Datasets: 2.19.2
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
bobbyw/deberta-v3-large_relationships_v3.1
|
bobbyw
| 2024-06-25T20:28:05Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:bobbyw/deberta-v3-large_relationships_v3.1",
"base_model:finetune:bobbyw/deberta-v3-large_relationships_v3.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-24T19:00:16Z |
---
base_model: bobbyw/deberta-v3-large_relationships_v3.1
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: deberta-v3-large_relationships_v3.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large_relationships_v3.1
This model is a fine-tuned version of [bobbyw/deberta-v3-large_relationships_v3.1](https://huggingface.co/bobbyw/deberta-v3-large_relationships_v3.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0082
- Accuracy: 0.0021
- F1: 0.0038
- Precision: 0.0019
- Recall: 0.6875
- Learning Rate: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.0026 | 1.0 | 947 | 0.0073 | 0.0021 | 0.0038 | 0.0019 | 0.6875 | 0.0009 |
| 0.0028 | 2.0 | 1894 | 0.0081 | 0.0023 | 0.0031 | 0.0016 | 0.5625 | 0.0008 |
| 0.0025 | 3.0 | 2841 | 0.0089 | 0.0021 | 0.0038 | 0.0019 | 0.6875 | 0.0006 |
| 0.0024 | 4.0 | 3788 | 0.0079 | 0.0021 | 0.0038 | 0.0019 | 0.6875 | 0.0005 |
| 0.0023 | 5.0 | 4735 | 0.0085 | 0.0023 | 0.0038 | 0.0019 | 0.6875 | 0.0004 |
| 0.0022 | 6.0 | 5682 | 0.0084 | 0.0021 | 0.0038 | 0.0019 | 0.6875 | 0.0003 |
| 0.0023 | 7.0 | 6629 | 0.0082 | 0.0023 | 0.0038 | 0.0019 | 0.6875 | 0.0001 |
| 0.0021 | 8.0 | 7576 | 0.0082 | 0.0021 | 0.0038 | 0.0019 | 0.6875 | 0.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Funnyworld1412/ABSA_mpnet_MiniLM-L6-polarity
|
Funnyworld1412
| 2024-06-25T20:12:11Z | 8 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-06-25T20:11:57Z |
---
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/all-mpnet-base-v2
metrics:
- accuracy
widget:
- text: gamenya seru bagus paket:gamenya seru bagus paket worth it gak lag mudah mainnya
tugas hadiah bagus modenya sayangnya game kadang ngebug gapapa kasih
- text: tolong perbaiki analog nya pengaturan posisi:tolong perbaiki analog nya pengaturan
posisi berpindah pindah
- text: visualisasi bagus segi graphic:visualisasi bagus segi graphic bagus ya game
cocok sih mantra nya banyakin contoh mantra penghilang
- text: jaringan udah bagus game jaringan nya bagus:game nya udah bagus jaringan game
nya bermasalah jaringan udah bagus game jaringan nya bagus mohon nambahin karakter
- text: kali game stuk loading server pakai jaringan:game bagus cma kendala kali game
stuk loading server pakai jaringan wifi masuk jaringan jaringan bermasalah main
game online lancar game susah akses tolong diperbaiki supercell detik bermain
coc lancar masuk kendala
pipeline_tag: text-classification
inference: false
model-index:
- name: SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.855072463768116
name: Accuracy
---
# SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** id_core_news_trf
- **SetFitABSA Aspect Model:** [Funnyworld1412/ABSA_mpnet_MiniLM-L6-aspect](https://huggingface.co/Funnyworld1412/ABSA_mpnet_MiniLM-L6-aspect)
- **SetFitABSA Polarity Model:** [Funnyworld1412/ABSA_mpnet_MiniLM-L6-polarity](https://huggingface.co/Funnyworld1412/ABSA_mpnet_MiniLM-L6-polarity)
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:--------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| negatif | <ul><li>'seru tolong diperbaiki pencarian lawan bermain ketemu player:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'bugnya nakal banget y:bugnya nakal banget y coc cr aja sukanya ngebug pas match suka hitam match relog kalo udah relog lawan udah 1 2 mahkota kecewa sih bintang nya 1 aja bug nya diurus bintang lawannya kadang g setara levelnya dahlah gk suka banget kalo main 2 vs 2 temen suka banget afk coba fitur report'</li><li>'kadang g setara levelnya dahlah gk suka:bugnya nakal banget y coc cr aja sukanya ngebug pas match suka hitam match relog kalo udah relog lawan udah 1 2 mahkota kecewa sih bintang nya 1 aja bug nya diurus bintang lawannya kadang g setara levelnya dahlah gk suka banget kalo main 2 vs 2 temen suka banget afk coba fitur report'</li></ul> |
| positif | <ul><li>'kapada supercell game nya bagus seru:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'fairrrr mending uninstall gamenya maen game yg:overall gamenya bagus pencarian match dikasih musuh yg levelnya levelku yg pertandingan fair menganggu kenyamanan pemainnya kalo nyariin musuh gapapa nyarinya kasih yg fair levelnya gaush buru buru ngasih yg gak fairrrr pas arena 4 udh dikasih musuh yg pletonnya 2 yg level 11 gak fairrrr mending uninstall gamenya maen game yg yg org gak fairr'</li><li>'gameplay menyenangkan pemain afk:gameplay menyenangkan pemain afk pertengahan menyerah 2vs2 mode mengganggu tolong tambahkan fitur lapor pemain'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8551 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"Funnyworld1412/ABSA_mpnet_MiniLM-L6-aspect",
"Funnyworld1412/ABSA_mpnet_MiniLM-L6-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 28.3626 | 83 |
| Label | Training Sample Count |
|:--------|:----------------------|
| negatif | 738 |
| positif | 528 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 5
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.2658 | - |
| 0.0158 | 50 | 0.148 | - |
| 0.0316 | 100 | 0.3023 | - |
| 0.0474 | 150 | 0.2508 | - |
| 0.0632 | 200 | 0.2206 | - |
| 0.0790 | 250 | 0.151 | - |
| 0.0948 | 300 | 0.2409 | - |
| 0.1106 | 350 | 0.2025 | - |
| 0.1264 | 400 | 0.1535 | - |
| 0.1422 | 450 | 0.1796 | - |
| 0.1580 | 500 | 0.0979 | - |
| 0.1738 | 550 | 0.2438 | - |
| 0.1896 | 600 | 0.221 | - |
| 0.2054 | 650 | 0.0416 | - |
| 0.2212 | 700 | 0.2027 | - |
| 0.2370 | 750 | 0.0484 | - |
| 0.2528 | 800 | 0.0606 | - |
| 0.2686 | 850 | 0.2689 | - |
| 0.2844 | 900 | 0.0505 | - |
| 0.3002 | 950 | 0.4393 | - |
| 0.3160 | 1000 | 0.1642 | - |
| 0.3318 | 1050 | 0.0824 | - |
| 0.3476 | 1100 | 0.3209 | - |
| 0.3633 | 1150 | 0.459 | - |
| 0.3791 | 1200 | 0.0042 | - |
| 0.3949 | 1250 | 0.2194 | - |
| 0.4107 | 1300 | 0.0088 | - |
| 0.4265 | 1350 | 0.0092 | - |
| 0.4423 | 1400 | 0.2324 | - |
| 0.4581 | 1450 | 0.0007 | - |
| 0.4739 | 1500 | 0.0004 | - |
| 0.4897 | 1550 | 0.0006 | - |
| 0.5055 | 1600 | 0.0232 | - |
| 0.5213 | 1650 | 0.2382 | - |
| 0.5371 | 1700 | 0.0154 | - |
| 0.5529 | 1750 | 0.0014 | - |
| 0.5687 | 1800 | 0.0817 | - |
| 0.5845 | 1850 | 0.0038 | - |
| 0.6003 | 1900 | 0.2101 | - |
| 0.6161 | 1950 | 0.0003 | - |
| 0.6319 | 2000 | 0.0003 | - |
| 0.6477 | 2050 | 0.0019 | - |
| 0.6635 | 2100 | 0.0003 | - |
| 0.6793 | 2150 | 0.0004 | - |
| 0.6951 | 2200 | 0.0004 | - |
| 0.7109 | 2250 | 0.0002 | - |
| 0.7267 | 2300 | 0.2335 | - |
| 0.7425 | 2350 | 0.0009 | - |
| 0.7583 | 2400 | 0.0004 | - |
| 0.7741 | 2450 | 0.018 | - |
| 0.7899 | 2500 | 0.0002 | - |
| 0.8057 | 2550 | 0.0003 | - |
| 0.8215 | 2600 | 0.0628 | - |
| 0.8373 | 2650 | 0.0002 | - |
| 0.8531 | 2700 | 0.0006 | - |
| 0.8689 | 2750 | 0.0003 | - |
| 0.8847 | 2800 | 0.0002 | - |
| 0.9005 | 2850 | 0.0008 | - |
| 0.9163 | 2900 | 0.0001 | - |
| 0.9321 | 2950 | 0.0001 | - |
| 0.9479 | 3000 | 0.0003 | - |
| 0.9637 | 3050 | 0.0001 | - |
| 0.9795 | 3100 | 0.0007 | - |
| 0.9953 | 3150 | 0.0003 | - |
| 1.0 | 3165 | - | 0.2173 |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- spaCy: 3.7.5
- Transformers: 4.36.2
- PyTorch: 2.1.2
- Datasets: 2.19.2
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Funnyworld1412/ABSA_mpnet_MiniLM-L6-aspect
|
Funnyworld1412
| 2024-06-25T20:11:55Z | 8 | 0 |
setfit
|
[
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-06-25T20:11:50Z |
---
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/all-MiniLM-L6-v2
metrics:
- accuracy
widget:
- text: hp:game yg grafiknya standar boros batrai bikin hp cepat panas game satunya
brawlstar ga
- text: game:game cocok indonesia gw main game dibilang berat squad buster jaringan
game berat bagus squad buster main koneksi terputus koneksi aman aman aja mohon
perbaiki jaringan
- text: sinyal:prmainannya bagus sinyal diperbaiki maen game online gak bagus2 aja
pingnya eh maen squad busters jaringannya hilang2 pas match klok sinyal udah hilang
masuk tulisan server konek muat ulang gak masuk in game saran tolong diperbaiki
ya min klok grafik gameplay udah bagus
- text: saran semoga game:gamenya bagus kendala game nya kadang kadang suka jaringan
jaringan bagus saran semoga game nya ditingkatkan disaat update
- text: gameplay:gameplay nya bagus gk match nya optimal main kadang suka lag gitu
sinyal nya bagus tolong supercell perbaiki sinyal
pipeline_tag: text-classification
inference: false
model-index:
- name: SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8316929133858267
name: Accuracy
---
# SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** id_core_news_trf
- **SetFitABSA Aspect Model:** [Funnyworld1412/ABSA_mpnet_MiniLM-L6-aspect](https://huggingface.co/Funnyworld1412/ABSA_mpnet_MiniLM-L6-aspect)
- **SetFitABSA Polarity Model:** [Funnyworld1412/ABSA_mpnet_MiniLM-L6-polarity](https://huggingface.co/Funnyworld1412/ABSA_mpnet_MiniLM-L6-polarity)
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aspect | <ul><li>'pencarian lawan:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'game:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'bugnya:bugnya nakal banget y coc cr aja sukanya ngebug pas match suka hitam match relog kalo udah relog lawan udah 1 2 mahkota kecewa sih bintang nya 1 aja bug nya diurus bintang lawannya kadang g setara levelnya dahlah gk suka banget kalo main 2 vs 2 temen suka banget afk coba fitur report'</li></ul> |
| no aspect | <ul><li>'player trophy mahkotanya jaraknya:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'peleton akun perbedaan level:kapada supercell game nya bagus seru tolong diperbaiki pencarian lawan bermain ketemu player trophy mahkotanya jaraknya dapet berpengaruh peleton akun perbedaan level'</li><li>'y coc cr:bugnya nakal banget y coc cr aja sukanya ngebug pas match suka hitam match relog kalo udah relog lawan udah 1 2 mahkota kecewa sih bintang nya 1 aja bug nya diurus bintang lawannya kadang g setara levelnya dahlah gk suka banget kalo main 2 vs 2 temen suka banget afk coba fitur report'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8317 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"Funnyworld1412/ABSA_mpnet_MiniLM-L6-aspect",
"Funnyworld1412/ABSA_mpnet_MiniLM-L6-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 29.9357 | 80 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 3834 |
| aspect | 1266 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 5
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.2801 | - |
| 0.0039 | 50 | 0.2365 | - |
| 0.0078 | 100 | 0.1068 | - |
| 0.0118 | 150 | 0.3401 | - |
| 0.0157 | 200 | 0.2112 | - |
| 0.0196 | 250 | 0.3529 | - |
| 0.0235 | 300 | 0.2338 | - |
| 0.0275 | 350 | 0.2039 | - |
| 0.0314 | 400 | 0.2006 | - |
| 0.0353 | 450 | 0.2939 | - |
| 0.0392 | 500 | 0.2053 | - |
| 0.0431 | 550 | 0.2036 | - |
| 0.0471 | 600 | 0.2229 | - |
| 0.0510 | 650 | 0.105 | - |
| 0.0549 | 700 | 0.2222 | - |
| 0.0588 | 750 | 0.1815 | - |
| 0.0627 | 800 | 0.2915 | - |
| 0.0667 | 850 | 0.276 | - |
| 0.0706 | 900 | 0.1682 | - |
| 0.0745 | 950 | 0.2328 | - |
| 0.0784 | 1000 | 0.2422 | - |
| 0.0824 | 1050 | 0.2753 | - |
| 0.0863 | 1100 | 0.2292 | - |
| 0.0902 | 1150 | 0.0791 | - |
| 0.0941 | 1200 | 0.3849 | - |
| 0.0980 | 1250 | 0.0964 | - |
| 0.1020 | 1300 | 0.1612 | - |
| 0.1059 | 1350 | 0.2755 | - |
| 0.1098 | 1400 | 0.1133 | - |
| 0.1137 | 1450 | 0.038 | - |
| 0.1176 | 1500 | 0.3195 | - |
| 0.1216 | 1550 | 0.0091 | - |
| 0.1255 | 1600 | 0.3148 | - |
| 0.1294 | 1650 | 0.1693 | - |
| 0.1333 | 1700 | 0.2411 | - |
| 0.1373 | 1750 | 0.2463 | - |
| 0.1412 | 1800 | 0.2807 | - |
| 0.1451 | 1850 | 0.112 | - |
| 0.1490 | 1900 | 0.2623 | - |
| 0.1529 | 1950 | 0.2465 | - |
| 0.1569 | 2000 | 0.4591 | - |
| 0.1608 | 2050 | 0.0556 | - |
| 0.1647 | 2100 | 0.0962 | - |
| 0.1686 | 2150 | 0.4525 | - |
| 0.1725 | 2200 | 0.2674 | - |
| 0.1765 | 2250 | 0.1513 | - |
| 0.1804 | 2300 | 0.3457 | - |
| 0.1843 | 2350 | 0.1415 | - |
| 0.1882 | 2400 | 0.0454 | - |
| 0.1922 | 2450 | 0.0156 | - |
| 0.1961 | 2500 | 0.2741 | - |
| 0.2 | 2550 | 0.1334 | - |
| 0.2039 | 2600 | 0.1838 | - |
| 0.2078 | 2650 | 0.1346 | - |
| 0.2118 | 2700 | 0.1022 | - |
| 0.2157 | 2750 | 0.3999 | - |
| 0.2196 | 2800 | 0.0953 | - |
| 0.2235 | 2850 | 0.1201 | - |
| 0.2275 | 2900 | 0.111 | - |
| 0.2314 | 2950 | 0.1081 | - |
| 0.2353 | 3000 | 0.1926 | - |
| 0.2392 | 3050 | 0.1047 | - |
| 0.2431 | 3100 | 0.2367 | - |
| 0.2471 | 3150 | 0.2034 | - |
| 0.2510 | 3200 | 0.0824 | - |
| 0.2549 | 3250 | 0.0338 | - |
| 0.2588 | 3300 | 0.2468 | - |
| 0.2627 | 3350 | 0.0082 | - |
| 0.2667 | 3400 | 0.0023 | - |
| 0.2706 | 3450 | 0.1106 | - |
| 0.2745 | 3500 | 0.1315 | - |
| 0.2784 | 3550 | 0.004 | - |
| 0.2824 | 3600 | 0.0836 | - |
| 0.2863 | 3650 | 0.2716 | - |
| 0.2902 | 3700 | 0.1873 | - |
| 0.2941 | 3750 | 0.4066 | - |
| 0.2980 | 3800 | 0.1448 | - |
| 0.3020 | 3850 | 0.0137 | - |
| 0.3059 | 3900 | 0.3471 | - |
| 0.3098 | 3950 | 0.1144 | - |
| 0.3137 | 4000 | 0.0596 | - |
| 0.3176 | 4050 | 0.0377 | - |
| 0.3216 | 4100 | 0.3316 | - |
| 0.3255 | 4150 | 0.0709 | - |
| 0.3294 | 4200 | 0.0515 | - |
| 0.3333 | 4250 | 0.2029 | - |
| 0.3373 | 4300 | 0.1191 | - |
| 0.3412 | 4350 | 0.2397 | - |
| 0.3451 | 4400 | 0.492 | - |
| 0.3490 | 4450 | 0.1178 | - |
| 0.3529 | 4500 | 0.3647 | - |
| 0.3569 | 4550 | 0.0098 | - |
| 0.3608 | 4600 | 0.2114 | - |
| 0.3647 | 4650 | 0.2392 | - |
| 0.3686 | 4700 | 0.2194 | - |
| 0.3725 | 4750 | 0.0578 | - |
| 0.3765 | 4800 | 0.0771 | - |
| 0.3804 | 4850 | 0.1582 | - |
| 0.3843 | 4900 | 0.0643 | - |
| 0.3882 | 4950 | 0.1372 | - |
| 0.3922 | 5000 | 0.0308 | - |
| 0.3961 | 5050 | 0.1247 | - |
| 0.4 | 5100 | 0.3076 | - |
| 0.4039 | 5150 | 0.1152 | - |
| 0.4078 | 5200 | 0.2112 | - |
| 0.4118 | 5250 | 0.0042 | - |
| 0.4157 | 5300 | 0.0869 | - |
| 0.4196 | 5350 | 0.0196 | - |
| 0.4235 | 5400 | 0.2406 | - |
| 0.4275 | 5450 | 0.3306 | - |
| 0.4314 | 5500 | 0.2328 | - |
| 0.4353 | 5550 | 0.008 | - |
| 0.4392 | 5600 | 0.0388 | - |
| 0.4431 | 5650 | 0.3812 | - |
| 0.4471 | 5700 | 0.6268 | - |
| 0.4510 | 5750 | 0.4426 | - |
| 0.4549 | 5800 | 0.1407 | - |
| 0.4588 | 5850 | 0.297 | - |
| 0.4627 | 5900 | 0.2657 | - |
| 0.4667 | 5950 | 0.1767 | - |
| 0.4706 | 6000 | 0.0152 | - |
| 0.4745 | 6050 | 0.2344 | - |
| 0.4784 | 6100 | 0.0447 | - |
| 0.4824 | 6150 | 0.0675 | - |
| 0.4863 | 6200 | 0.3086 | - |
| 0.4902 | 6250 | 0.5258 | - |
| 0.4941 | 6300 | 0.0826 | - |
| 0.4980 | 6350 | 0.0079 | - |
| 0.5020 | 6400 | 0.1817 | - |
| 0.5059 | 6450 | 0.0767 | - |
| 0.5098 | 6500 | 0.0221 | - |
| 0.5137 | 6550 | 0.0419 | - |
| 0.5176 | 6600 | 0.2452 | - |
| 0.5216 | 6650 | 0.0232 | - |
| 0.5255 | 6700 | 0.0804 | - |
| 0.5294 | 6750 | 0.1752 | - |
| 0.5333 | 6800 | 0.0127 | - |
| 0.5373 | 6850 | 0.0454 | - |
| 0.5412 | 6900 | 0.1759 | - |
| 0.5451 | 6950 | 0.0435 | - |
| 0.5490 | 7000 | 0.0109 | - |
| 0.5529 | 7050 | 0.0162 | - |
| 0.5569 | 7100 | 0.0133 | - |
| 0.5608 | 7150 | 0.2363 | - |
| 0.5647 | 7200 | 0.4987 | - |
| 0.5686 | 7250 | 0.1149 | - |
| 0.5725 | 7300 | 0.4613 | - |
| 0.5765 | 7350 | 0.3837 | - |
| 0.5804 | 7400 | 0.2439 | - |
| 0.5843 | 7450 | 0.0014 | - |
| 0.5882 | 7500 | 0.0177 | - |
| 0.5922 | 7550 | 0.0051 | - |
| 0.5961 | 7600 | 0.0418 | - |
| 0.6 | 7650 | 0.0061 | - |
| 0.6039 | 7700 | 0.2205 | - |
| 0.6078 | 7750 | 0.1769 | - |
| 0.6118 | 7800 | 0.0071 | - |
| 0.6157 | 7850 | 0.2271 | - |
| 0.6196 | 7900 | 0.3049 | - |
| 0.6235 | 7950 | 0.0016 | - |
| 0.6275 | 8000 | 0.2263 | - |
| 0.6314 | 8050 | 0.0057 | - |
| 0.6353 | 8100 | 0.1408 | - |
| 0.6392 | 8150 | 0.0303 | - |
| 0.6431 | 8200 | 0.0026 | - |
| 0.6471 | 8250 | 0.1743 | - |
| 0.6510 | 8300 | 0.2078 | - |
| 0.6549 | 8350 | 0.1764 | - |
| 0.6588 | 8400 | 0.0127 | - |
| 0.6627 | 8450 | 0.2435 | - |
| 0.6667 | 8500 | 0.0527 | - |
| 0.6706 | 8550 | 0.247 | - |
| 0.6745 | 8600 | 0.002 | - |
| 0.6784 | 8650 | 0.0087 | - |
| 0.6824 | 8700 | 0.1866 | - |
| 0.6863 | 8750 | 0.0087 | - |
| 0.6902 | 8800 | 0.1589 | - |
| 0.6941 | 8850 | 0.1848 | - |
| 0.6980 | 8900 | 0.0298 | - |
| 0.7020 | 8950 | 0.0081 | - |
| 0.7059 | 9000 | 0.3057 | - |
| 0.7098 | 9050 | 0.2059 | - |
| 0.7137 | 9100 | 0.2154 | - |
| 0.7176 | 9150 | 0.0013 | - |
| 0.7216 | 9200 | 0.1961 | - |
| 0.7255 | 9250 | 0.0129 | - |
| 0.7294 | 9300 | 0.0021 | - |
| 0.7333 | 9350 | 0.2106 | - |
| 0.7373 | 9400 | 0.0008 | - |
| 0.7412 | 9450 | 0.1261 | - |
| 0.7451 | 9500 | 0.1948 | - |
| 0.7490 | 9550 | 0.013 | - |
| 0.7529 | 9600 | 0.208 | - |
| 0.7569 | 9650 | 0.2382 | - |
| 0.7608 | 9700 | 0.0054 | - |
| 0.7647 | 9750 | 0.1869 | - |
| 0.7686 | 9800 | 0.0334 | - |
| 0.7725 | 9850 | 0.0197 | - |
| 0.7765 | 9900 | 0.0057 | - |
| 0.7804 | 9950 | 0.0056 | - |
| 0.7843 | 10000 | 0.0043 | - |
| 0.7882 | 10050 | 0.0025 | - |
| 0.7922 | 10100 | 0.6808 | - |
| 0.7961 | 10150 | 0.043 | - |
| 0.8 | 10200 | 0.0536 | - |
| 0.8039 | 10250 | 0.2435 | - |
| 0.8078 | 10300 | 0.0051 | - |
| 0.8118 | 10350 | 0.0653 | - |
| 0.8157 | 10400 | 0.017 | - |
| 0.8196 | 10450 | 0.0036 | - |
| 0.8235 | 10500 | 0.1561 | - |
| 0.8275 | 10550 | 0.001 | - |
| 0.8314 | 10600 | 0.1975 | - |
| 0.8353 | 10650 | 0.2378 | - |
| 0.8392 | 10700 | 0.1276 | - |
| 0.8431 | 10750 | 0.0719 | - |
| 0.8471 | 10800 | 0.1951 | - |
| 0.8510 | 10850 | 0.0446 | - |
| 0.8549 | 10900 | 0.2045 | - |
| 0.8588 | 10950 | 0.0598 | - |
| 0.8627 | 11000 | 0.0094 | - |
| 0.8667 | 11050 | 0.1117 | - |
| 0.8706 | 11100 | 0.0528 | - |
| 0.8745 | 11150 | 0.0047 | - |
| 0.8784 | 11200 | 0.1492 | - |
| 0.8824 | 11250 | 0.2204 | - |
| 0.8863 | 11300 | 0.0089 | - |
| 0.8902 | 11350 | 0.0709 | - |
| 0.8941 | 11400 | 0.1111 | - |
| 0.8980 | 11450 | 0.0048 | - |
| 0.9020 | 11500 | 0.0173 | - |
| 0.9059 | 11550 | 0.2862 | - |
| 0.9098 | 11600 | 0.2745 | - |
| 0.9137 | 11650 | 0.0054 | - |
| 0.9176 | 11700 | 0.0074 | - |
| 0.9216 | 11750 | 0.0036 | - |
| 0.9255 | 11800 | 0.0869 | - |
| 0.9294 | 11850 | 0.2333 | - |
| 0.9333 | 11900 | 0.15 | - |
| 0.9373 | 11950 | 0.066 | - |
| 0.9412 | 12000 | 0.1742 | - |
| 0.9451 | 12050 | 0.0009 | - |
| 0.9490 | 12100 | 0.1246 | - |
| 0.9529 | 12150 | 0.1674 | - |
| 0.9569 | 12200 | 0.1937 | - |
| 0.9608 | 12250 | 0.0724 | - |
| 0.9647 | 12300 | 0.0044 | - |
| 0.9686 | 12350 | 0.0013 | - |
| 0.9725 | 12400 | 0.0313 | - |
| 0.9765 | 12450 | 0.0925 | - |
| 0.9804 | 12500 | 0.1742 | - |
| 0.9843 | 12550 | 0.2294 | - |
| 0.9882 | 12600 | 0.1073 | - |
| 0.9922 | 12650 | 0.038 | - |
| 0.9961 | 12700 | 0.1866 | - |
| 1.0 | 12750 | 0.0141 | 0.2274 |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 3.0.1
- spaCy: 3.7.5
- Transformers: 4.36.2
- PyTorch: 2.1.2
- Datasets: 2.19.2
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
futo-org/acft-whisper-small.en
|
futo-org
| 2024-06-25T20:09:11Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-25T20:08:21Z |
---
license: apache-2.0
---
# Model Card
## Model Description
This is part of a fine-tuned series of [OpenAI's Whisper models](https://github.com/openai/whisper).
The models have been fine-tuned for dynamic audio-context robustness, allowing shorter audio contexts for better performance on short audio inputs. The method is detailed [in our GitHub repo](https://github.com/futo-org/whisper-acft).
- **Developed by:** FUTO
- **License:** Apache-2.0
- **Finetuned from model:** OpenAI Whisper
## Uses
These models are not useful by themselves under default Whisper runtime configurations.
The easiest way to experiment with different audio contexts is to use whisper.cpp with the `--audio-context` parameter. We provide converted whisper.cpp models in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).
## Other Information
More information can be found in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).
|
readerbench/whisper-ro
|
readerbench
| 2024-06-25T20:08:26Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"romanian",
"dataset:readerbench/echo",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-25T19:05:30Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- whisper
- romanian
datasets:
- readerbench/echo
metrics:
- wer
model-index:
- name: whisper-ro
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Echo
type: readerbench/echo
config: ro
metrics:
- name: WER
type: wer
value: 0.08668345828147764
---
# whisper-ro
This model is a fine-tuned version of
[openai/whisper-small](https://huggingface.co/openai/whisper-small) on the [Echo
dataset](https://huggingface.co/datasets/readerbench/echo), a large open-source
Romanian dataset.
| Dataset (WER, %) | Small | Large-v2 | Fine-tuned small <br/><small>(this model)</small> |
|:------------:|:-----:|:--------:|:-------------------------------------------------:|
| Common Voice | 33.2 | 15.8 | 12.2 |
| FLEURS | 29.8 | 14.4 | 10.9 |
| VoxPopuli | 28.6 | 14.4 | 9.4 |
| Echo | >100 | >100 | 8.6 |
| RSC | 38.6 | 28.5 | 5.4 |
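A minimal inference sketch with the 🤗 Transformers ASR pipeline is shown below; `sample_ro.wav` is a hypothetical local Romanian recording and is not part of the original card.
```python
from transformers import pipeline

# Minimal sketch: "sample_ro.wav" is a placeholder for any local Romanian speech file.
asr = pipeline("automatic-speech-recognition", model="readerbench/whisper-ro")
print(asr("sample_ro.wav")["text"])
```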
### Training hyperparameters
The following hyperparameters were used during training:
- `learning_rate`: 1e-05
- `train_batch_size`: 128
- `eval_batch_size`: 128
- `seed`: 42
- `distributed_type`: multi-GPU
- `num_devices`: 2
- `total_train_batch_size`: 256
- `total_eval_batch_size`: 256
- `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08
- `lr_scheduler_type`: linear
- `lr_scheduler_warmup_steps`: 500
- `num_epochs`: 20.0
- `mixed_precision_training`: Native AMP
|
futo-org/acft-whisper-small
|
futo-org
| 2024-06-25T20:08:06Z | 49 | 1 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-25T20:07:15Z |
---
license: apache-2.0
---
# Model Card
## Model Description
This is part of a fine-tuned series of [OpenAI's Whisper models](https://github.com/openai/whisper).
The models have been fine-tuned for dynamic audio-context robustness, allowing shorter audio contexts for better performance on short audio inputs. The method is detailed [in our GitHub repo](https://github.com/futo-org/whisper-acft).
- **Developed by:** FUTO
- **License:** Apache-2.0
- **Finetuned from model:** OpenAI Whisper
## Uses
These models are not useful by themselves under default Whisper runtime configurations.
The easiest way to experiment with different audio contexts is to use whisper.cpp with the `--audio-context` parameter. We provide converted whisper.cpp models in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).
## Other Information
More information can be found in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).
|
futo-org/acft-whisper-base
|
futo-org
| 2024-06-25T20:06:10Z | 19 | 1 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-25T20:05:51Z |
---
license: apache-2.0
---
# Model Card
## Model Description
This is part of a fine-tuned series of [OpenAI's Whisper models](https://github.com/openai/whisper).
The models have been fine-tuned for dynamic audio-context robustness, allowing shorter audio contexts for better performance on short audio inputs. The method is detailed [in our GitHub repo](https://github.com/futo-org/whisper-acft).
- **Developed by:** FUTO
- **License:** Apache-2.0
- **Finetuned from model:** OpenAI Whisper
## Uses
These models are not useful by themselves under default Whisper runtime configurations.
The easiest way to experiment with different audio contexts is to use whisper.cpp with the `--audio-context` parameter. We provide converted whisper.cpp models in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).
## Other Information
More information can be found in our [GitHub README](https://github.com/futo-org/whisper-acft?tab=readme-ov-file#finetuning-whisper-for-dynamic-audio-context-robustness).
|
braindao/iq-code-evmind-v3-granite-8b-instruct-average
|
braindao
| 2024-06-25T20:06:06Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"solidity",
"conversational",
"en",
"dataset:braindao/Solidity-Dataset",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T18:54:03Z |
---
license: apache-2.0
datasets:
- braindao/Solidity-Dataset
language:
- en
tags:
- solidity
---
`braindao/iq-code-evmind-v3-granite-8b-instruct-average` is a language model specialized for generating Solidity code, the programming language used for writing smart contracts on the Ethereum blockchain. It is trained using the [braindao/Solidity-Dataset](https://huggingface.co/datasets/braindao/Solidity-Dataset), specifically leveraging the “average” column, to improve its ability to produce accurate Solidity code snippets and instructions. Its 8-billion-parameter instruction-tuned architecture makes it well suited to understanding and generating contextually appropriate Solidity code for developers and blockchain enthusiasts.
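As a rough usage sketch (not part of the original card), the model can be loaded with the 🤗 Transformers text-generation pipeline; the prompt and generation settings below are illustrative assumptions.
```python
from transformers import pipeline

# Illustrative sketch: prompt and decoding parameters are assumptions, not documented values.
generator = pipeline(
    "text-generation",
    model="braindao/iq-code-evmind-v3-granite-8b-instruct-average",
)

prompt = "Write a minimal ERC-20 token contract in Solidity 0.8 with a fixed total supply."
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```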
|
Niggendar/incursiosMemeDiffusion_v16PDXL
|
Niggendar
| 2024-06-25T20:05:27Z | 92 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-25T19:56:48Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
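In the absence of author-provided instructions, here is a minimal, hedged sketch inferred only from the repository tags (`StableDiffusionXLPipeline`, text-to-image); the prompt, precision, and device are illustrative assumptions.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hedged sketch based on the repository tags; all settings below are placeholders.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/incursiosMemeDiffusion_v16PDXL",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("an example prompt describing the desired image").images[0]
image.save("example.png")
```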
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ariffiq99/KUCI_COPA_xlm_roberta_large_finetuned
|
Ariffiq99
| 2024-06-25T20:03:18Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:Ariffiq99/COPA_xlm_roberta_large_finetuned",
"base_model:finetune:Ariffiq99/COPA_xlm_roberta_large_finetuned",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-06-25T14:03:57Z |
---
license: mit
base_model: Ariffiq99/COPA_xlm_roberta_large_finetuned
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: KUCI_COPA_xlm_roberta_large_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KUCI_COPA_xlm_roberta_large_finetuned
This model is a fine-tuned version of [Ariffiq99/COPA_xlm_roberta_large_finetuned](https://huggingface.co/Ariffiq99/COPA_xlm_roberta_large_finetuned) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8340
- F1: 0.8481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
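For illustration only, these settings map roughly onto a standard `transformers` `TrainingArguments` configuration; the actual training script is not included in this card, so treat the following as a sketch with a hypothetical output directory.
```python
# Sketch of how the hyperparameters above could be expressed with
# transformers.TrainingArguments; the output_dir is hypothetical.
from transformers import TrainingArguments
training_args = TrainingArguments(
    output_dir="KUCI_COPA_xlm_roberta_large_finetuned",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```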
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4689 | 1.0 | 5196 | 0.4359 | 0.8377 |
| 0.3217 | 2.0 | 10392 | 0.4639 | 0.8439 |
| 0.2073 | 3.0 | 15588 | 0.5932 | 0.8475 |
| 0.1082 | 4.0 | 20784 | 0.8340 | 0.8481 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
subhavarshith/NEW_SROIE_DATAFORMAT_DONUT
|
subhavarshith
| 2024-06-25T19:52:07Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-06-25T18:49:32Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: NEW_SROIE_DATAFORMAT_DONUT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEW_SROIE_DATAFORMAT_DONUT
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Niggendar/7thAnimeXLPonyA_v10
|
Niggendar
| 2024-06-25T19:48:37Z | 60 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-25T19:39:53Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ugurcelebi/llama3-8b-instruct-tr-qlora
|
ugurcelebi
| 2024-06-25T19:41:07Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-25T19:38:26Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Niggendar/vividpdxl_v10
|
Niggendar
| 2024-06-25T19:31:23Z | 78 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-25T19:21:06Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
johnpaulbin/qwen0.5b-tokipona-2epochs
|
johnpaulbin
| 2024-06-25T19:22:15Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T19:21:00Z |
---
base_model: unsloth/qwen2-0.5b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** johnpaulbin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-0.5b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
FL33TW00D-HF/distil-whisper-large-v3
|
FL33TW00D-HF
| 2024-06-25T19:21:48Z | 493 | 3 |
transformers
|
[
"transformers",
"gguf",
"whisper",
"automatic-speech-recognition",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-21T15:50:44Z |
---
license: apache-2.0
---
# Model Card for Ratchet + Distil Whisper Large V3
<!-- Provide a quick summary of what the model is/does. -->
This is a conversion from the GGML format of [distil-whisper/distil-large-v3-ggml](https://huggingface.co/distil-whisper/distil-large-v3-ggml) into the Ratchet custom format.
## Model Card Contact
[[email protected]](mailto:[email protected])
|
NeuBiomLabs/AgroNeuLM-gemma-2b-it-16bit
|
NeuBiomLabs
| 2024-06-25T19:19:50Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T19:15:44Z |
---
base_model: unsloth/gemma-2b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
- sft
---
# Uploaded model
- **Developed by:** NeuBiomLabs
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jartine/aya-23-8B-llamafile
|
jartine
| 2024-06-25T19:19:07Z | 449 | 5 | null |
[
"llamafile",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"license:other",
"region:us"
] | null | 2024-05-26T20:35:49Z |
---
license: other
inference: false
base_model: CohereForAI/aya-23-8B
license_link: LICENSE
quantized_by: jartine
prompt_template: |
<BOS_TOKEN>
<|START_OF_TURN_TOKEN|>
<|USER_TOKEN|>{{prompt}}<|END_OF_TURN_TOKEN|>
<|START_OF_TURN_TOKEN|>
<|CHATBOT_TOKEN|>
tags:
- llamafile
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
---
# aya-23-8B - llamafile
This repository contains executable weights (which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile)) that run on
Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64.
- Model creator: [CohereForAI](https://huggingface.co/CohereForAI)
- Original model: [CohereForAI/aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)
This is a multilingual model, with a focus on Arabic.
## Quickstart
You can run the following commands which download, concatenate, and
execute the model.
```
wget https://huggingface.co/jartine/aya-23-8B-llamafile/resolve/main/aya-23-8B.Q6_K.llamafile
chmod +x aya-23-8B.Q6_K.llamafile
./aya-23-8B.Q6_K.llamafile --help # view manual
./aya-23-8B.Q6_K.llamafile # launch web gui + oai api
./aya-23-8B.Q6_K.llamafile -p ... # cli interface (scriptable)
```
Alternatively, you may download an official `llamafile` executable from
Mozilla Ocho on GitHub, in which case you can use the aya-23-8B llamafiles
as a simple weights data file.
```
llamafile -m ./aya-23-8B.Q6_K.llamafile ...
```
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)
of the README.
## Prompting
Command-line instruction example:
```
./aya-23-8B.Q6_K.llamafile --log-disable --silent-prompt -p '<BOS_TOKEN>
<|START_OF_TURN_TOKEN|>
<|USER_TOKEN|>Who is the president?<|END_OF_TURN_TOKEN|>
<|START_OF_TURN_TOKEN|>
<|CHATBOT_TOKEN|>'
```
"Prompt Template" (copy and paste this into the web GUI):
```
<BOS_TOKEN>
<|SYSTEM_TOKEN|>{{prompt}}<|END_OF_TURN_TOKEN|>
{{history}}
<|START_OF_TURN_TOKEN|>
<|CHATBOT_TOKEN|>
```
"Chat history template" (copy and paste this into the web GUI):
```
<|START_OF_TURN_TOKEN|>
<|USER_TOKEN|>{{message}}<|END_OF_TURN_TOKEN|>
```
The maximum context size of this model is 8192 tokens. These llamafiles
use a default context size of 512 tokens. Whenever you need the maximum
context size to be available with llamafile for any given model, you can
pass the `-c 0` flag. The temperature on these llamafiles is set to zero
by default, because it helps. This can be changed, e.g. `--temp 0.8`.
## License
The aya-23-8B license requires:
- You can't use these weights for commercial purposes
- You have to give Cohere credit if you share or fine tune it
- You can't use it for purposes they consider unacceptable, such as
spam, misinformation, etc. The license says they can change the
definition of acceptable use at will.
- CC-BY-NC 4.0 does not permit adding downstream restrictions, so you
  can't tack on your own list of unacceptable uses if you create and
  distribute a fine-tuned version.
This special license only applies to the LLM weights (i.e. the .gguf
file inside .llamafile). The llamafile software itself is permissively
licensed, having only components licensed under terms like Apache 2.0,
MIT, BSD, ISC, zlib, etc.
## About llamafile
llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023.
It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.
In addition to being executables, llamafiles are also zip archives. Each
llamafile contains a GGUF file, which you can extract using the `unzip`
command. If you want to change or add files to your llamafiles, then the
`zipalign` command (distributed on the llamafile github) should be used
instead of the traditional `zip` command.
---
# Model Card for Aya-23-8B
**Try Aya 23**
You can try out Aya 23 (35B) before downloading the weights in our hosted Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
## Model Summary
Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 8-billion version of the Aya 23 model. We also released a 35-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-23-35B).
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license), requires also adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: aya-23-8B
- Model Size: 8 billion parameters
### Usage
Please install transformers from the source repository that includes the necessary changes for this model
```python
# pip install transformers==4.41.1
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-23-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the command-r-plus chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
### Example Notebook
[This notebook](https://huggingface.co/CohereForAI/aya-23-8B/blob/main/Aya_23_notebook.ipynb) showcases a detailed use of Aya 23 (8B) including inference and fine-tuning with [QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes).
## Model Details
**Input**: Models input text only.
**Output**: Models generate text only.
**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 8192
### Evaluation
<img src="benchmarks.png" alt="multilingual benchmarks" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<img src="winrates.png" alt="average win rates" width="650" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).
### Try the model today
You can try Aya 23 in the Cohere [playground](https://dashboard.cohere.com/playground/chat) here. You can also use it in our dedicated Hugging Face Space [here](https://huggingface.co/spaces/CohereForAI/aya-23).
### Citation info
```bibtex
@misc{aya23technicalreport,
title={Aya 23: Open Weight Releases to Further Multilingual Progress},
author={Viraat Aryabumi and John Dang and Dwarak Talupuru and Saurabh Dash and David Cairuz and Hangyu Lin and Bharat Venkitesh and Madeline Smith and Kelly Marchisio and Sebastian Ruder and Acyr Locatelli and Julia Kreutzer and Nick Frosst and Phil Blunsom and Marzieh Fadaee and Ahmet Üstün and Sara Hooker},
url={https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23},
year={2024}
}
```
|
Xu-Ouyang/pythia-6.9b-deduped-int4-step143000-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-06-25T19:15:20Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-06-25T19:10:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nadellaroshni/xlnet
|
nadellaroshni
| 2024-06-25T19:13:13Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlnet",
"text-classification",
"generated_from_trainer",
"base_model:xlnet/xlnet-base-cased",
"base_model:finetune:xlnet/xlnet-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T14:21:01Z |
---
license: mit
base_model: xlnet/xlnet-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlnet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet
This model is a fine-tuned version of [xlnet/xlnet-base-cased](https://huggingface.co/xlnet/xlnet-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3991
- Accuracy: 0.823
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.627 | 1.0 | 625 | 0.4767 | 0.773 |
| 0.4575 | 2.0 | 1250 | 0.3991 | 0.823 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cpu
- Datasets 2.20.0
- Tokenizers 0.19.1
|
TeeZee/2xbagel-dpo-34b-v0.2
|
TeeZee
| 2024-06-25T19:12:15Z | 1,394 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T01:48:24Z |
---
tags:
- merge
model-index:
- name: 2xbagel-dpo-34b-v0.2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 65.27
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 79.35
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 67.15
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 2.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TeeZee/2xbagel-dpo-34b-v0.2
name: Open LLM Leaderboard
license: apache-2.0
---
# Bagel DPO 57B

## Model Details
- A result of interleaving layers of [jondurbin/bagel-dpo-34b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2) with itself.
- The resulting model has 100 layers and approximately 57 billion parameters.
- See [mergekit-config.yml](https://huggingface.co/TeeZee/2xbagel-dpo-34b-v0.2/blob/main/mergekit-config.yml) for details on the merge method used.
**Warning: This model can produce NSFW content!**
## Results
A bigger version of the original, and uncensored like the original. All comments are greatly appreciated; download it, test it, and if you appreciate my work, consider buying me my fuel:
<a href="https://www.buymeacoffee.com/TeeZee" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TeeZee__2xbagel-dpo-34b-v0.2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.66|
|AI2 Reasoning Challenge (25-Shot)|65.27|
|HellaSwag (10-Shot) |79.35|
|MMLU (5-Shot) |73.64|
|TruthfulQA (0-shot) |67.15|
|Winogrande (5-shot) |76.40|
|GSM8k (5-shot) | 2.12|
|
booksouls/bart-large-cnn
|
booksouls
| 2024-06-25T19:05:40Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-12T22:39:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HeinzFelipe2/Quintessence-Finetune_Trading
|
HeinzFelipe2
| 2024-06-25T19:02:40Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T17:55:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Niggendar/vividpdxl_realVAE
|
Niggendar
| 2024-06-25T19:02:01Z | 71 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-25T18:53:31Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
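In the absence of author-provided instructions, here is a minimal sketch assuming the checkpoint loads with the `StableDiffusionXLPipeline` listed in this card's tags; the prompt and sampling settings are illustrative only.
```python
# Minimal sketch (assumption: standard SDXL loading; prompt and settings are illustrative).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/vividpdxl_realVAE",  # repo id from this card
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    prompt="a scenic mountain lake at sunrise, highly detailed",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```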
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-6.9b-deduped-int4-step110000-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-06-25T18:58:10Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-06-25T07:37:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
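In the absence of author-provided instructions, here is a minimal sketch assuming this 4-bit GPTQ checkpoint loads through transformers' GPTQ integration (optimum plus a GPTQ backend such as auto-gptq, and accelerate for `device_map="auto"`); treat it as untested.
```python
# Minimal sketch (assumption: GPTQ backend and accelerate installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-6.9b-deduped-int4-step110000-GPTQ-wikitext2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```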
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gozderam/bert-fine-tuned-cola-pt-loop
|
gozderam
| 2024-06-25T18:57:36Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T18:55:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
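In the absence of author-provided instructions, here is a minimal sketch assuming the checkpoint works with the `text-classification` pipeline implied by this card's tags; the label names returned depend on the undocumented training setup.
```python
# Minimal sketch (assumption: standard sequence-classification head; labels undocumented).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gozderam/bert-fine-tuned-cola-pt-loop",  # repo id from this card
)
print(classifier("This sentence are not grammatical."))  # illustrative CoLA-style input
```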
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-1.4b-deduped-int4-step110000-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-06-25T18:52:47Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-06-25T06:14:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xu-Ouyang/pythia-1.4b-deduped-int4-step71000-GPTQ-wikitext2
|
Xu-Ouyang
| 2024-06-25T18:48:25Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-06-25T05:38:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hgissbkh/ALMA-13B-LoRA-CPO-xCOMET-QE-Mono-Choose-Mid-Reject-Mid
|
hgissbkh
| 2024-06-25T18:45:42Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T18:28:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
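In the absence of author-provided instructions, here is a minimal sketch assuming the checkpoint loads as a plain causal LM; the ALMA-style translation prompt is an assumption based on the model name only.
```python
# Minimal sketch (assumption: causal-LM loading; the prompt format is inferred from the name).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hgissbkh/ALMA-13B-LoRA-CPO-xCOMET-QE-Mono-Choose-Mid-Reject-Mid"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Translate this from English to German:\nEnglish: The weather is nice today.\nGerman:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```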
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hgissbkh/ALMA-13B-LoRA-CPO-xCOMET-QE-Mono-Choose-Mid-Reject-Low
|
hgissbkh
| 2024-06-25T18:45:35Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T18:28:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vdaita/diff-codellama-7b
|
vdaita
| 2024-06-25T18:45:32Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/codellama-7b-bnb-4bit",
"base_model:finetune:unsloth/codellama-7b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T18:09:19Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/codellama-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** vdaita
- **License:** apache-2.0
- **Finetuned from model :** unsloth/codellama-7b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
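A minimal loading sketch, assuming the uploaded weights can be used as a standard Llama causal LM through transformers; the prompt below is purely illustrative.
```python
# Minimal sketch (assumption: weights usable via transformers; prompt is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vdaita/diff-codellama-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "# Write a Python function that reverses a string\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```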
|
Pepu06/Din1.0
|
Pepu06
| 2024-06-25T18:39:14Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-25T18:38:19Z |
---
license: apache-2.0
---
|
sianbrumm/Ner_Productname
|
sianbrumm
| 2024-06-25T18:36:16Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-german-uncased",
"base_model:finetune:dbmdz/bert-base-german-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-05-07T16:23:39Z |
---
license: mit
base_model: dbmdz/bert-base-german-uncased
tags:
- generated_from_trainer
model-index:
- name: first_ner_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_ner_model
This model is a fine-tuned version of [dbmdz/bert-base-german-uncased](https://huggingface.co/dbmdz/bert-base-german-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6659
- Label Metrics: {'Alcohol_content': {'precision': 0.5172413793103449, 'recall': 0.7894736842105263, 'f1': 0.625, 'number': 19}, 'Brand': {'precision': 0.8753623188405797, 'recall': 0.9082706766917293, 'f1': 0.8915129151291513, 'number': 665}, 'Packaging.Quantity': {'precision': 0.6891891891891891, 'recall': 0.796875, 'f1': 0.7391304347826088, 'number': 64}, 'Packaging.Type': {'precision': 0.6, 'recall': 0.3, 'f1': 0.4, 'number': 20}, 'Packaging.Volume': {'precision': 0.978494623655914, 'recall': 0.7054263565891473, 'f1': 0.8198198198198199, 'number': 129}, 'Packaging.Weight': {'precision': 0.8891402714932126, 'recall': 0.9974619289340102, 'f1': 0.9401913875598087, 'number': 394}, 'Producttype': {'precision': 0.4715984147952444, 'recall': 0.5344311377245509, 'f1': 0.5010526315789474, 'number': 668}, '_': {'precision': 0.367983367983368, 'recall': 0.42857142857142855, 'f1': 0.3959731543624161, 'number': 413}, 'overall_precision': 0.657608695652174, 'overall_recall': 0.7141652613827993, 'overall_f1': 0.6847210994341149, 'overall_accuracy': 0.7419354838709677}
- Overall Metrics: {'overall_precision': 0.657608695652174, 'overall_recall': 0.7141652613827993, 'overall_f1': 0.6847210994341149, 'overall_accuracy': 0.7419354838709677}
## Model description
More information needed
## Intended uses & limitations
More information needed
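A minimal usage sketch, assuming the checkpoint works with the `token-classification` pipeline; the entity labels (Brand, Packaging.Weight, Producttype, ...) come from the evaluation results above, and the example product title is made up.
```python
# Minimal sketch (assumption: token-classification pipeline; the product title is invented).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sianbrumm/Ner_Productname",  # repo id of this model
    aggregation_strategy="simple",      # merge sub-word tokens into entity spans
)
print(ner("coca cola zero 6 x 0,33 l dose"))
```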
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Label Metrics | Overall Metrics |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|:---------------:|
| No log | 1.0 | 175 | 0.6684 | {'Alcohol_content': {'precision': 0.46153846153846156, 'recall': 0.3157894736842105, 'f1': 0.37499999999999994, 'number': 19}, 'Brand': {'precision': 0.8885672937771346, 'recall': 0.9233082706766917, 'f1': 0.9056047197640117, 'number': 665}, 'Packaging.Quantity': {'precision': 0.7272727272727273, 'recall': 0.75, 'f1': 0.7384615384615384, 'number': 64}, 'Packaging.Type': {'precision': 0.6666666666666666, 'recall': 0.1, 'f1': 0.1739130434782609, 'number': 20}, 'Packaging.Volume': {'precision': 0.9782608695652174, 'recall': 0.6976744186046512, 'f1': 0.8144796380090498, 'number': 129}, 'Packaging.Weight': {'precision': 0.8828828828828829, 'recall': 0.9949238578680203, 'f1': 0.9355608591885441, 'number': 394}, 'Producttype': {'precision': 0.43159203980099503, 'recall': 0.5194610778443114, 'f1': 0.4714673913043478, 'number': 668}, '_': {'precision': 0.3300970873786408, 'recall': 0.4116222760290557, 'f1': 0.3663793103448276, 'number': 413}, 'overall_precision': 0.6350837138508372, 'overall_recall': 0.7036256323777403, 'overall_f1': 0.6676, 'overall_accuracy': 0.7299687825182102} | {'overall_precision': 0.6350837138508372, 'overall_recall': 0.7036256323777403, 'overall_f1': 0.6676, 'overall_accuracy': 0.7299687825182102} |
| No log | 2.0 | 350 | 0.6659 | {'Alcohol_content': {'precision': 0.5172413793103449, 'recall': 0.7894736842105263, 'f1': 0.625, 'number': 19}, 'Brand': {'precision': 0.8753623188405797, 'recall': 0.9082706766917293, 'f1': 0.8915129151291513, 'number': 665}, 'Packaging.Quantity': {'precision': 0.6891891891891891, 'recall': 0.796875, 'f1': 0.7391304347826088, 'number': 64}, 'Packaging.Type': {'precision': 0.6, 'recall': 0.3, 'f1': 0.4, 'number': 20}, 'Packaging.Volume': {'precision': 0.978494623655914, 'recall': 0.7054263565891473, 'f1': 0.8198198198198199, 'number': 129}, 'Packaging.Weight': {'precision': 0.8891402714932126, 'recall': 0.9974619289340102, 'f1': 0.9401913875598087, 'number': 394}, 'Producttype': {'precision': 0.4715984147952444, 'recall': 0.5344311377245509, 'f1': 0.5010526315789474, 'number': 668}, '_': {'precision': 0.367983367983368, 'recall': 0.42857142857142855, 'f1': 0.3959731543624161, 'number': 413}, 'overall_precision': 0.657608695652174, 'overall_recall': 0.7141652613827993, 'overall_f1': 0.6847210994341149, 'overall_accuracy': 0.7419354838709677} | {'overall_precision': 0.657608695652174, 'overall_recall': 0.7141652613827993, 'overall_f1': 0.6847210994341149, 'overall_accuracy': 0.7419354838709677} |
| 0.75 | 3.0 | 525 | 0.6749 | {'Alcohol_content': {'precision': 0.6363636363636364, 'recall': 0.7368421052631579, 'f1': 0.6829268292682926, 'number': 19}, 'Brand': {'precision': 0.9033674963396779, 'recall': 0.9278195488721804, 'f1': 0.9154302670623146, 'number': 665}, 'Packaging.Quantity': {'precision': 0.6619718309859155, 'recall': 0.734375, 'f1': 0.6962962962962963, 'number': 64}, 'Packaging.Type': {'precision': 0.5384615384615384, 'recall': 0.35, 'f1': 0.4242424242424242, 'number': 20}, 'Packaging.Volume': {'precision': 0.9787234042553191, 'recall': 0.7131782945736435, 'f1': 0.8251121076233184, 'number': 129}, 'Packaging.Weight': {'precision': 0.8888888888888888, 'recall': 0.9949238578680203, 'f1': 0.9389221556886227, 'number': 394}, 'Producttype': {'precision': 0.5122615803814714, 'recall': 0.562874251497006, 'f1': 0.536376604850214, 'number': 668}, '_': {'precision': 0.4088235294117647, 'recall': 0.3365617433414044, 'f1': 0.36918990703851257, 'number': 413}, 'overall_precision': 0.7022518765638032, 'overall_recall': 0.7099494097807757, 'overall_f1': 0.7060796645702306, 'overall_accuracy': 0.7419354838709677} | {'overall_precision': 0.7022518765638032, 'overall_recall': 0.7099494097807757, 'overall_f1': 0.7060796645702306, 'overall_accuracy': 0.7419354838709677} |
| 0.75 | 4.0 | 700 | 0.6891 | {'Alcohol_content': {'precision': 0.7142857142857143, 'recall': 0.7894736842105263, 'f1': 0.7500000000000001, 'number': 19}, 'Brand': {'precision': 0.9104258443465492, 'recall': 0.9323308270676691, 'f1': 0.9212481426448736, 'number': 665}, 'Packaging.Quantity': {'precision': 0.6712328767123288, 'recall': 0.765625, 'f1': 0.7153284671532847, 'number': 64}, 'Packaging.Type': {'precision': 0.42105263157894735, 'recall': 0.4, 'f1': 0.41025641025641024, 'number': 20}, 'Packaging.Volume': {'precision': 0.9787234042553191, 'recall': 0.7131782945736435, 'f1': 0.8251121076233184, 'number': 129}, 'Packaging.Weight': {'precision': 0.8888888888888888, 'recall': 0.9949238578680203, 'f1': 0.9389221556886227, 'number': 394}, 'Producttype': {'precision': 0.505464480874317, 'recall': 0.5538922155688623, 'f1': 0.5285714285714285, 'number': 668}, '_': {'precision': 0.4217506631299735, 'recall': 0.38498789346246975, 'f1': 0.40253164556962034, 'number': 413}, 'overall_precision': 0.699343724364233, 'overall_recall': 0.7188026981450253, 'overall_f1': 0.7089397089397088, 'overall_accuracy': 0.7447970863683663} | {'overall_precision': 0.699343724364233, 'overall_recall': 0.7188026981450253, 'overall_f1': 0.7089397089397088, 'overall_accuracy': 0.7447970863683663} |
| 0.75 | 5.0 | 875 | 0.7116 | {'Alcohol_content': {'precision': 0.7142857142857143, 'recall': 0.7894736842105263, 'f1': 0.7500000000000001, 'number': 19}, 'Brand': {'precision': 0.9092240117130308, 'recall': 0.9338345864661655, 'f1': 0.9213649851632048, 'number': 665}, 'Packaging.Quantity': {'precision': 0.7, 'recall': 0.765625, 'f1': 0.7313432835820896, 'number': 64}, 'Packaging.Type': {'precision': 0.5, 'recall': 0.55, 'f1': 0.5238095238095238, 'number': 20}, 'Packaging.Volume': {'precision': 0.968421052631579, 'recall': 0.7131782945736435, 'f1': 0.8214285714285715, 'number': 129}, 'Packaging.Weight': {'precision': 0.8886363636363637, 'recall': 0.9923857868020305, 'f1': 0.9376498800959233, 'number': 394}, 'Producttype': {'precision': 0.5089655172413793, 'recall': 0.5523952095808383, 'f1': 0.529791816223977, 'number': 668}, '_': {'precision': 0.4172661870503597, 'recall': 0.4213075060532688, 'f1': 0.419277108433735, 'number': 413}, 'overall_precision': 0.6963202587949858, 'overall_recall': 0.7259696458684655, 'overall_f1': 0.7108359133126935, 'overall_accuracy': 0.7484391259105099} | {'overall_precision': 0.6963202587949858, 'overall_recall': 0.7259696458684655, 'overall_f1': 0.7108359133126935, 'overall_accuracy': 0.7484391259105099} |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.1+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
gozderam/bert-fine-tuned-cola
|
gozderam
| 2024-06-25T18:35:54Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T18:34:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tuskbyte/yes_no_model_english
|
tuskbyte
| 2024-06-25T18:35:08Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-classification",
"generated_from_trainer",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-25T18:07:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: gpt2
model-index:
- name: yes_no_model_english
results: []
language:
- en
---
```
label_map = {'True': 0, 'False': 1, 'Invalid input': 2}
```
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yes_no_model_english
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, GPT2Tokenizer
import torch

# Model identifier on the Hub
model_id = 'tuskbyte/yes_no_model_english'

# Index order follows the training label map: {'True': 0, 'False': 1, 'Invalid input': 2}
label_map = ["Yes", "No", "Invalid input"]

# Load the model
model = AutoModelForSequenceClassification.from_pretrained(model_id)

try:
    # Try to load the tokenizer shipped with the model
    tokenizer = AutoTokenizer.from_pretrained(model_id)
except OSError:
    # Fall back to the base GPT-2 tokenizer if loading fails
    print(f"Tokenizer for '{model_id}' not found. Using gpt2 as fallback.")
    tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# No Trainer is needed for inference; a plain forward pass is enough.

# Example input
question = "Would you like to participate?"
answer = "yes i would"
input_text = f"{question} {answer}"

# Tokenize the input and move model and inputs to the same device
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
inputs = tokenizer(input_text, return_tensors="pt").to(device)

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits

# Get the predicted label
predicted_class_id = logits.argmax().item()
print("predicted_class_id", predicted_class_id)
labels = model.config.id2label
print("labels", labels)
predicted_label = labels[predicted_class_id]

# Output the result
print(f"Predicted label: {predicted_label}")
print(f"Model prediction is: {label_map[predicted_class_id]}")
```
```
Supports English only.
```
## Training procedure
A full write-up of the training procedure is upcoming.
### Training hyperparameters
The following hyperparameters were used during training (an illustrative mapping to `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
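
As a rough illustration, the listed settings would map onto `TrainingArguments` roughly as sketched below. This is not the original training script: the `output_dir` and the label names passed to the classification head are assumptions based on the label map above.

```
from transformers import GPT2ForSequenceClassification, GPT2Tokenizer, TrainingArguments

# GPT-2 ships without a padding token, so one must be assigned before batching.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token

# Three-way classification head on top of GPT-2; label names follow the label map above.
model = GPT2ForSequenceClassification.from_pretrained(
    "gpt2",
    num_labels=3,
    id2label={0: "True", 1: "False", 2: "Invalid input"},
    label2id={"True": 0, "False": 1, "Invalid input": 2},
)
model.config.pad_token_id = tokenizer.pad_token_id

# TrainingArguments mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./results",          # assumption: actual output directory not documented
    learning_rate=5e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
)
# A Trainer would then combine these with tokenized question/answer pairs as train/eval datasets.
```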
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2072 | 0.2857 | 10 | 1.0470 |
| 1.0909 | 0.5714 | 20 | 0.7972 |
| 0.8701 | 0.8571 | 30 | 0.5695 |
| 0.5525 | 1.1429 | 40 | 0.2802 |
| 0.2131 | 1.4286 | 50 | 0.0569 |
| 0.0454 | 1.7143 | 60 | 0.0093 |
| 0.0144 | 2.0 | 70 | 0.0012 |
| 0.0016 | 2.2857 | 80 | 0.0003 |
| 0.0006 | 2.5714 | 90 | 0.0002 |
| 0.0006 | 2.8571 | 100 | 0.0002 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
vuminhtue/Bert_base_NER_Merge4
|
vuminhtue
| 2024-06-25T18:30:27Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-25T18:30:12Z |
---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Bert_base_NER_Merge4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/tuevu_smu/huggingface/runs/tz6lqk7l)
# Bert_base_NER_Merge4
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
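
Since no intended-use details are documented, the snippet below is only a generic token-classification inference sketch; the example sentence is arbitrary, and the meaning of the predicted entity labels depends on the undocumented fine-tuning data.

```
from transformers import pipeline

# Minimal token-classification inference sketch; the label set comes from the
# checkpoint's config and reflects whatever data the model was fine-tuned on.
ner = pipeline(
    "token-classification",
    model="vuminhtue/Bert_base_NER_Merge4",
    aggregation_strategy="simple",
)

for entity in ner("Barack Obama was born in Hawaii."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```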
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an illustrative mapping to a token-classification training setup follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
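
For orientation only, these settings would correspond to a token-classification fine-tuning setup roughly like the sketch below; the number of labels, the `output_dir`, and the datasets are assumptions, since none of them are documented in this card.

```
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

# num_labels=9 is an assumption; the actual label set of the fine-tuning data is not documented.
model = AutoModelForTokenClassification.from_pretrained(
    "google-bert/bert-base-uncased", num_labels=9
)

# Pads both input ids and word-level labels within each batch.
data_collator = DataCollatorForTokenClassification(tokenizer=tokenizer)

# TrainingArguments mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./results",          # assumption: actual output directory not documented
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
# A Trainer would combine these with tokenized, label-aligned NER datasets.
```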
### Training results
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 1.13.1
- Datasets 2.18.0
- Tokenizers 0.19.1
|