Dataset columns (each record below follows this order):

| Column | Type | Range |
|---|---|---|
| modelId | string | length 5–139 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-06-29 00:46:34 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 502 classes |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-06-29 00:44:25 |
| card | string | length 11 – 1.01M |
mistral-community/Mistral-7B-Instruct-v0.3 | mistral-community | 2024-07-01T08:52:29Z | 814 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-26T15:46:26Z | ---
license: apache-2.0
---
# Model Card for Mistral-7B-Instruct-v0.3
> [!WARNING]
> This model checkpoint is provided as-is and might not be up to date. Please use the corresponding version from the [mistralai](https://huggingface.co/mistralai) organization.
The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2):
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
## Installation
It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)
# fetch only the files needed by mistral-inference
snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```
### Instruct following
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## Generate with `transformers`
If you want to use Hugging Face `transformers` to generate text, you can do something like this.
```py
from transformers import pipeline
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
# the pipeline downloads the model and applies its chat template to the messages
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
```
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model better respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall |
mistral-community/Codestral-22B-v0.1 | mistral-community | 2024-07-01T08:51:52Z | 269 | 17 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-05-29T20:52:18Z | ---
inference: false
license: other
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
tags:
- code
language:
- code
---
> [!WARNING]
> This model checkpoint is provided as-is and might not be up to date. Please use the corresponding version from the [mistralai](https://huggingface.co/mistralai) organization.
# Model Card for Codestral-22B-v0.1
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific instructions
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like those in VS Code)
## Inference
Inference works the same way as for Mistral 7B; a Fill-in-the-Middle sketch is shown below.
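The following minimal FIM sketch assumes the `FIMRequest`/`encode_fim` API from `mistral_common` (the exact import path may vary between versions) and uses placeholder paths for the downloaded weights:
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.request import FIMRequest

tokenizer = MistralTokenizer.from_file("/path/to/codestral-22B-v0.1/tokenizer.model.v3")
model = Transformer.from_folder("/path/to/codestral-22B-v0.1")

prefix = """def add("""
suffix = """    return sum"""

# encode the prefix/suffix pair into FIM tokens
request = FIMRequest(prompt=prefix, suffix=suffix)
tokens = tokenizer.encode_fim(request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

# the completion may re-emit the suffix; keep only the middle part
middle = result.split(suffix)[0].strip()
print(middle)
```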
## Limitations
Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model better respect guardrails, allowing for deployment in environments requiring moderated outputs.
## License
Codestral-22B-v0.1 is released under the `MNPL-0.1` license.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
Niggendar/waiC_v30 | Niggendar | 2024-07-01T08:51:17Z | 63 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-07-01T08:40:59Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
moris12345/falcon-moris-3 | moris12345 | 2024-07-01T08:49:43Z | 4 | 1 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"trl",
"sft",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-01T08:45:37Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
johnwee1/starcoder-1b-python | johnwee1 | 2024-07-01T08:47:55Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_bigcode",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T08:46:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kodoqmc/XTTS-v2_San-Ti | kodoqmc | 2024-07-01T08:46:36Z | 14 | 5 | coqui | [
"coqui",
"text-to-speech",
"license:other",
"region:us"
] | text-to-speech | 2024-07-01T07:43:31Z | ---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
library_name: coqui
pipeline_tag: text-to-speech
widget:
- text: "Once when I was six years old I saw a magnificent picture"
---
# ⓍTTS_v2 - The San-Ti Fine-Tuned Model
This repository hosts a fine-tuned version of the ⓍTTS model, trained on 4 minutes of unique voice lines from The San-Ti. The voice lines were sourced from a clip of 3 Body Problem on YouTube, which can be found here:
[The San-Ti Explain how they Stop Science on Earth | 3 Body Problem | Netflix](https://www.youtube.com/watch?v=caxiX38DK68)

This image is just an illustration; the San-Ti's true appearance is never shown.
Listen to a sample of the ⓍTTS_v2 - The San-Ti Fine-Tuned Model:
<audio controls>
<source src="https://huggingface.co/kodoqmc/XTTS-v2_San-Ti/resolve/main/generatedTTS.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
Here's a San-Ti voice line clip from the training data:
<audio controls>
<source src="https://huggingface.co/kodoqmc/XTTS-v2_San-Ti/resolve/main/reference.wav" type="audio/wav">
Your browser does not support the audio element.
</audio>
## Features
- 🎙️ **Voice Cloning**: Realistic voice cloning with just a short audio clip.
- 🌍 **Multi-Lingual Support**: Generates speech in 17 different languages while maintaining The San-Ti's voice.
- 😃 **Emotion & Style Transfer**: Captures the emotional tone and style of the original voice.
- 🔄 **Cross-Language Cloning**: Maintains the unique voice characteristics across different languages.
- 🎧 **High-Quality Audio**: Outputs at a 24kHz sampling rate for clear and high-fidelity audio.
## Supported Languages
The model supports the following 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), and Hindi (hi).
## Usage in Roll Cage
🤖💬 Boost your AI experience with this Ollama add-on! Enjoy real-time audio 🎙️ and text 🔍 chats, LaTeX rendering 📜, agent automations ⚙️, workflows 🔄, text-to-image 📝➡️🖼️, image-to-text 🖼️➡️🔤, image-to-video 🖼️➡️🎥 transformations. Fine-tune text 📝, voice 🗣️, and image 🖼️ gens. Includes Windows macro controls 🖥️ and DuckDuckGo search.
[ollama_agent_roll_cage (OARC)](https://github.com/Leoleojames1/ollama_agent_roll_cage) is a completely local Python & CMD toolset add-on for the Ollama command line interface. The OARC toolset automates the creation of agents, giving the user more control over the likely output. It provides SYSTEM prompt templates for each ./Modelfile, allowing users to design and deploy custom agents quickly. Users can select which local model file is used in agent construction with the desired system prompt.
## CoquiTTS and Resources
- 🐸💬 **CoquiTTS**: [Coqui TTS on GitHub](https://github.com/coqui-ai/TTS)
- 📚 **Documentation**: [ReadTheDocs](https://tts.readthedocs.io/en/latest/)
- 👩💻 **Questions**: [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions)
- 🗯 **Community**: [Discord](https://discord.gg/5eXr5seRrv)
## License
This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). Read more about the origin story of CPML [here](https://coqui.ai/blog/tts/cpml).
## Contact
Join our 🐸Community on [Discord](https://discord.gg/fBC58unbKE) and follow us on [Twitter](https://twitter.com/coqui_ai). For inquiries, email us at [email protected].
Using 🐸TTS API:
```python
from TTS.api import TTS
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# point model_path/config_path at the downloaded XTTS-v2_San-Ti checkpoint
tts = TTS(model_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_San-Ti/",
          config_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_San-Ti/config.json",
          progress_bar=False).to(device)
# generate speech by cloning a voice using default settings
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
file_path="output.wav",
speaker_wav="/path/to/target/speaker.wav",
language="en")
```
Using 🐸TTS Command line:
```console
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
--text "Bugün okula gitmek istemiyorum." \
--speaker_wav /path/to/target/speaker.wav \
--language_idx tr \
--use_cuda true
```
Using the model directly:
```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()
outputs = model.synthesize(
"It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
config,
speaker_wav="/data/TTS-public/_refclips/3.wav",
gpt_cond_len=3,
language="en",
)
```
|
AtakanTekparmak/gemma-2-9b-GGUF | AtakanTekparmak | 2024-07-01T08:42:50Z | 24 | 1 | transformers | [
"transformers",
"gguf",
"conversational",
"text-generation",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-28T11:52:19Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
quantized_by: AtakanTekparmak
---
## gemma-2-9b GGUF
Llama.cpp version <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3259">b3259</a> was used for the HF-to-GGUF conversion.
Original model: https://huggingface.co/google/gemma-2-9b
Available Precisions:
- f16
- q8_0
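For a quick local test, one option is `llama-cpp-python`; below is a minimal sketch, assuming the q8_0 file has been downloaded to the working directory (the filename is a placeholder):
```py
from llama_cpp import Llama

# load the quantized checkpoint; adjust model_path to the file you downloaded
llm = Llama(model_path="gemma-2-9b.q8_0.gguf", n_ctx=4096)

# gemma-2-9b is a base model, so plain text completion is appropriate
out = llm("Question: What is the capital of France?\nAnswer:", max_tokens=32)
print(out["choices"][0]["text"])
```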
## License
The [Gemma Terms of Use](https://www.kaggle.com/models/google/gemma/license/consent?verifyToken=CfDJ8GYiNaMVVSVCnegdIdgHCPNs7G2XBpoXihxv2r9_tiHG8yBpLIS5MxFnNQ7_383D4mLGj6dON1dNpVka6uBRl4CI4AnANO_EC7WBtsHqcwNRa-74ScR_Z7jdnObpRuRcOZTUEXiGDu3Fcf1YiAWUEpSFio6htkpio-0Iye9evEIaGPnPLEpSmyFzShl8pk_IYbVZ3yfaX-3eM7bzy4HAuw) apply, the same as for the original model.
|
d4niel92/leagaleasy-llama-3-instruct-v2 | d4niel92 | 2024-07-01T08:34:41Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T08:29:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Q-bert_-_Optimus-7B-gguf | RichardErkhov | 2024-07-01T08:34:33Z | 12 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-07-01T06:25:48Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Optimus-7B - GGUF
- Model creator: https://huggingface.co/Q-bert/
- Original model: https://huggingface.co/Q-bert/Optimus-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Optimus-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Optimus-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Optimus-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Optimus-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Optimus-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Optimus-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Optimus-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Optimus-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Optimus-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Optimus-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Optimus-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Optimus-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Optimus-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Optimus-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Optimus-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Optimus-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Optimus-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Optimus-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Optimus-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Optimus-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Optimus-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Optimus-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Q-bert_-_Optimus-7B-gguf/blob/main/Optimus-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
license: apache-2.0
datasets:
- meta-math/MetaMathQA
language:
- en
pipeline_tag: text-generation
tags:
- Math
---
## Optimus-7B
<img src="_c3f4a76b-c0b1-4fba-9537-33f8fd697f2d.jpg" width="300" height="200" alt="Optimus-7B">
Fine-tuned on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) with [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA).
You can use the ChatML prompt format.
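A minimal sketch of building a ChatML prompt with `transformers`, assuming the repository's tokenizer ships a ChatML chat template (otherwise, construct the `<|im_start|>`/`<|im_end|>` markup manually):
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Q-bert/Optimus-7B")
messages = [{"role": "user", "content": "What is 12 * 7?"}]

# render the conversation into a ChatML-formatted prompt string
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```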
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [Here](https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/Q-bert/Optimus-7B/results_2023-12-04T18-59-49.207215.json)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 69.09 |
| ARC (25-shot) | 65.44 |
| HellaSwag (10-shot) | 85.41 |
| MMLU (5-shot) | 63.61 |
| TruthfulQA (0-shot) | 55.79 |
| Winogrande (5-shot) | 78.77 |
| GSM8K (5-shot) | 65.50 |
|
pokaree/dormitory-ft-test2 | pokaree | 2024-07-01T08:33:35Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"moondream1",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-07-01T08:24:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Breezezzz/DS200-speechnorm-model | Breezezzz | 2024-07-01T08:33:15Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-01T08:32:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf | RichardErkhov | 2024-07-01T08:32:43Z | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T06:20:12Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OpenHermes-2.5-neural-chat-7b-v3-2-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf/blob/main/OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
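Any file above can be fetched programmatically; a minimal sketch using `huggingface_hub` (repo and filename taken from the Q4_K_M row of the table):
```py
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant (~4.07GB) listed above
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/Weyaxi_-_OpenHermes-2.5-neural-chat-7b-v3-2-7B-gguf",
    filename="OpenHermes-2.5-neural-chat-7b-v3-2-7B.Q4_K_M.gguf",
)
print(gguf_path)
```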
Original model description:
---
license: apache-2.0
tags:
- mistral
datasets:
- Open-Orca/SlimOrca
model-index:
- name: OpenHermes-2.5-neural-chat-7b-v3-2-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.38
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 63.59
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B
name: Open LLM Leaderboard
---

Merge of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) and [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2) using the TIES merge method.
_Note: [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) merge version is available [here](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-1-7B/)_
### *Weights*
- [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5
- [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2): 0.3
### *Density*
- [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5
- [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2): 0.5
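Tools like mergekit commonly express this kind of TIES merge as a YAML config; below is a hypothetical sketch of the equivalent settings, written as the Python dict such a config deserializes to. The card names neither the tool nor the base model, so both are assumptions.
```py
# Hypothetical mergekit-style TIES configuration mirroring the weights and
# densities listed above. NOTE: base_model is an assumption, not from the card.
merge_config = {
    "merge_method": "ties",
    "base_model": "mistralai/Mistral-7B-v0.1",  # assumed
    "models": [
        {"model": "teknium/OpenHermes-2.5-Mistral-7B",
         "parameters": {"weight": 0.5, "density": 0.5}},
        {"model": "Intel/neural-chat-7b-v3-2",
         "parameters": {"weight": 0.3, "density": 0.5}},
    ],
}
```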
# Prompt Templates
You can use these prompt templates, but I recommend using ChatML.
### ChatML [(OpenHermes-2.5-Mistral-7B)](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B):
```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
### [neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2):
```
### System:
{system}
### User:
{user}
### Assistant:
```
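As a quick illustration, the ChatML template above can be rendered programmatically rather than by hand; a minimal sketch, assuming the model's tokenizer ships a ChatML chat template:
```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/OpenHermes-2.5-neural-chat-7b-v3-2-7B")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the TIES merge method in one sentence."},
]

# Renders the <|im_start|>/<|im_end|> ChatML format shown above
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```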
# Quantized versions
Quantized versions of this model are available thanks to [TheBloke](https://hf.co/TheBloke).
##### GPTQ
- [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GPTQ)
##### GGUF
- [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GGUF](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-GGUF)
##### AWQ
- [TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-AWQ](https://huggingface.co/TheBloke/OpenHermes-2.5-neural-chat-7B-v3-2-7B-AWQ)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-7b-v3-2-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.71|
|AI2 Reasoning Challenge (25-Shot)|66.38|
|HellaSwag (10-Shot) |84.11|
|MMLU (5-Shot) |62.84|
|TruthfulQA (0-shot) |63.59|
|Winogrande (5-shot) |78.53|
|GSM8k (5-shot) |56.79|
If you would like to support me:
[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)
|
ELiRF/ideo-b | ELiRF | 2024-07-01T08:25:37Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-01T08:13:38Z | ---
license: apache-2.0
---
|
nnilayy/test-4 | nnilayy | 2024-07-01T08:24:42Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-07-01T08:22:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Niggendar/nyantcha_style_model | Niggendar | 2024-07-01T08:23:58Z | 43 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-07-01T08:17:51Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
prithivMLmods/Berenices-Mini-Emoji-LoRA | prithivMLmods | 2024-07-01T08:19:44Z | 51 | 10 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-06-30T12:18:56Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: 'Smile emoji, 4k'
output:
url: images/12.png
- text: 'Love emoji, 4k'
output:
url: images/13.png
- text: 'Cry emoji, 4k'
output:
url: images/14.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: emoji
license: creativeml-openrail-m
---
# Berenices-Mini-Emoji
<Gallery />
Berenices-Mini-Emoji-LoRA
## Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat | 27 |
| Epoch | 15 | Save Every N Epochs | 1 |
For better results, use it with the base model or other compatible SDXL checkpoints.
## SETTING-UP
```py
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

# Load the base model (or substitute another compatible SDXL checkpoint)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# Attach the LoRA weights
pipe.load_lora_weights("prithivMLmods/Berenices-Mini-Emoji-LoRA", weight_name="emojies.safetensors", adapter_name="emoji")
pipe.set_adapters("emoji")
pipe.to("cuda")
```
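Once the pipeline is set up, generation is a single call; a minimal sketch using one of the trigger prompts from this card (step count and guidance scale are illustrative choices, not values from the card):
```py
prompt = "Smile emoji, 4k"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("smile_emoji.png")
```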
## Trigger prompts
Smile emoji, 4k
Cry emoji, 4k
Love emoji, 4k
| Parameter | Value |
|-----------------|---------------------------------------------------------------------------------------|
| Prompt | Smile emoji, 4k |
| Sampler | euler |
## Trigger words
You should use `emoji` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Berenices-Mini-Emoji-LoRA/tree/main) them in the Files & versions tab.
|
mradermacher/neo_7b_sft_v0.1-GGUF | mradermacher | 2024-07-01T08:12:27Z | 30 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:m-a-p/neo_7b_sft_v0.1",
"base_model:quantized:m-a-p/neo_7b_sft_v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-30T20:38:57Z | ---
base_model: m-a-p/neo_7b_sft_v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/m-a-p/neo_7b_sft_v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/neo_7b_sft_v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
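For a concrete starting point once a file is downloaded, here is a minimal sketch with `llama-cpp-python`; it assumes the Q4_K_S quant from the table below is already in the working directory:
```py
from llama_cpp import Llama

# Load a local GGUF quant; n_ctx sets the context window
llm = Llama(model_path="neo_7b_sft_v0.1.Q4_K_S.gguf", n_ctx=4096)

out = llm("Explain quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```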
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.IQ3_XS.gguf) | IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.IQ3_S.gguf) | IQ3_S | 3.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q3_K_S.gguf) | Q3_K_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.IQ3_M.gguf) | IQ3_M | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q5_K_S.gguf) | Q5_K_S | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q5_K_M.gguf) | Q5_K_M | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q6_K.gguf) | Q6_K | 6.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.Q8_0.gguf) | Q8_0 | 8.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/neo_7b_sft_v0.1-GGUF/resolve/main/neo_7b_sft_v0.1.f16.gguf) | f16 | 15.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
curiousily/Llama-3-8B-Instruct-Finance-RAG | curiousily | 2024-07-01T08:03:17Z | 395 | 15 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"finance",
"conversational",
"en",
"dataset:virattt/financial-qa-10K",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T21:54:10Z | ---
library_name: transformers
tags:
- finance
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- virattt/financial-qa-10K
language:
- en
pipeline_tag: text-generation
---
# Llama 3 8B Instruct (Financial RAG)
This model is a fine-tuned version of the original [Llama 3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model
on 4000 examples from the [virattt/financial-qa-10K](https://huggingface.co/datasets/virattt/financial-qa-10K) dataset.
The model is fine-tuned using a LoRA adapter for RAG use cases. It is optimized to answer a question based on a context:
```txt
Answer the question:
{question}
Using the information:
{context}
```
## Usage
Load the model:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

MODEL_NAME = "curiousily/Llama-3-8B-Instruct-Finance-RAG"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
device_map="auto"
)
pipe = pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=128,
return_full_text=False,
)
```
Format the prompt (uses the original Instruct prompt format):
````py
prompt = """
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Use only the information to answer the question<|eot_id|><|start_header_id|>user<|end_header_id|>
How much did the company's net earnings amount to in fiscal 2022?
Information:
```
Net earnings were $17.1 billion in fiscal 2022.
```<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
````
And make a prediction:
```py
outputs = pipe(prompt)
print(outputs[0]["generated_text"])
```
```
$17.1 billion
```
Here's a helper function to build your prompts:
```py
from textwrap import dedent

def create_test_prompt(data_row):
prompt = dedent(f"""
{data_row["question"]}
Information:
```
{data_row["context"]}
```
""")
messages = [
{"role": "system", "content": "Use only the information to answer the question"},
{"role": "user", "content": prompt},
]
return tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```
Where `data_row` must be a dict:
```py
data_row = {
"question": "...",
"context": "..."
}
```
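Putting the pieces together, an end-to-end call might look like this (reusing the `pipe` and `create_test_prompt` defined above, with the question/context pair from the card's own example):
```py
data_row = {
    "question": "How much did the company's net earnings amount to in fiscal 2022?",
    "context": "Net earnings were $17.1 billion in fiscal 2022.",
}

prompt = create_test_prompt(data_row)
outputs = pipe(prompt)
print(outputs[0]["generated_text"])  # expected: "$17.1 billion"
```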
## Sample Predictions
Here's a sample of the predictions from *trained* and *untrained* models
```txt
Example 1
answer: Delta Air Lines' agreements with its regional carriers typically last at least ten years with options for extensions. Delta controls operational aspects like scheduling and pricing.
trained: Delta Connection agreements typically last at least ten years with options for extensions.
untrained: According to the information, the terms of Delta Air Lines' agreements with its regional carriers through Delta Connection are:
1. The agreements typically last at least ten years.
2. There are options for extensions.
3. Delta controls major operational aspects like scheduling and pricing.
4. The regional carriers supply the services.
Example 2
answer: The company evaluates acquisition-related intangibles for impairment by comparing the asset's carrying amount to undiscounted future net cash flows expected from the asset. An impairment loss is recognized if the carrying amount exceeds the asset's recoverable amount.
trained: The company evaluates acquisition-related intangible and other long-lived assets for impairment whenever events or changes in circumstances indicate that the carrying amount may not be recoverable. This involves measuring the recoverability of the asset by comparing its carrying amount to the future undiscounted net cash flows expected to be generated by the asset group.
untrained: According to the information, the company uses the following criteria to determine whether an impairment loss should be recognized on acquisition-related intangible assets:
* Events or changes in circumstances that indicate the carrying amount of an asset may not be recoverable.
* A comparison of the carrying amount of an asset to future undiscounted net cash flows expected to be generated by the asset group.
In other words, the company evaluates whether the asset's carrying amount is recoverable by comparing it to the expected future cash flows, which involves assumptions about future prospects and computations of estimated future cash flows.
Example 3
answer: In the United States, the approval process for biosimilars is governed by the Public Health Service Act (PHSA) and the regulations implementing these statutes, specifically including provisions made under federal health care reform legislation enacted in March 2010.
trained: The Federal Food, Drug, and Cosmetic Act (the FFDCA) and the Public Health Service Act (PHSA)
untrained: The legal framework that governs the approval process for biosimilars in the United States is the Federal Food, Drug, and Cosmetic Act (FFDCA) and the Public Health Service Act (PHSA), as well as the regulations implementing these statutes.
Example 4
answer: Timothy S. Teter holds a B.S. degree in Mechanical Engineering from the University of California at Davis and a J.D. degree from Stanford Law School.
trained: B.S. in Mechanical Engineering from the University of California at Davis and a J.D. from Stanford Law School
untrained: According to the information, Timothy S. Teter holds:
1. A B.S. degree in Mechanical Engineering from the University of California at Davis.
2. A J.D. degree from Stanford Law School.
Example 5
answer: Beginning in fiscal year 2024, the company plans to exclude paused Connected Fitness subscriptions from its new 'Ending Paid Connected Fitness Subscriptions' metric and will treat a pause action as a churn event in its 'Average Net Monthly Paid Connected Fitness Subscription Churn' metric.
trained: Starting in fiscal year 2024, the company will no longer include paused Connected Fitness subscriptions in their Ending Paid Connected Fitness Subscriptions metric and will treat a pause action as a churn event in their Average Net Monthly Paid Connected Fitness Subscription Churn.
untrained: Starting in fiscal year 2024, the company will:
* No longer include paused Connected Fitness subscriptions in the Ending Paid Connected Fitness Subscriptions metric
* Treat a pause action as a churn event in the Average Net Monthly Paid Connected Fitness Subscription Churn
```
## License
Uses the original Llama 3 License.
A custom commercial license is available at: https://llama.meta.com/llama3/license |
Bumblebee1249/Phi-3-mini-4k-instruct-finetuned | Bumblebee1249 | 2024-07-01T07:57:59Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T07:55:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vegaandre/FeatureExtractionV1.0 | vegaandre | 2024-07-01T07:38:57Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-01T07:37:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gguichard/numind-NuExtract-tiny-15-examples_v2 | gguichard | 2024-07-01T07:23:14Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-28T12:49:16Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
heegyu/0628-qwen2-7B-infini-qarv | heegyu | 2024-07-01T07:17:56Z | 11 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"dataset:BAAI/Infinity-Instruct",
"dataset:HAERAE-HUB/qarv-instruct-100k",
"base_model:Qwen/Qwen2-7B",
"base_model:finetune:Qwen/Qwen2-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-28T11:01:18Z | ---
datasets:
- BAAI/Infinity-Instruct
- HAERAE-HUB/qarv-instruct-100k
base_model: Qwen/Qwen2-7B
--- |
Intern95/opt-350m-gptq | Intern95 | 2024-07-01T07:13:19Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-07-01T06:22:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Intern95/opt-125m-gptq | Intern95 | 2024-07-01T07:05:21Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-07-01T04:44:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF | mradermacher | 2024-07-01T07:04:49Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"generated_from_trainer",
"en",
"dataset:princeton-nlp/llama3-ultrafeedback",
"base_model:Magpie-Align/Llama-3-8B-Instruct-UltraDPO3-NT",
"base_model:quantized:Magpie-Align/Llama-3-8B-Instruct-UltraDPO3-NT",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-07-01T06:36:26Z | ---
base_model: Magpie-Align/Llama-3-8B-Instruct-UltraDPO3-NT
datasets:
- princeton-nlp/llama3-ultrafeedback
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
tags:
- alignment-handbook
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Magpie-Align/Llama-3-8B-Instruct-UltraDPO3-NT
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
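For a quick local test, one way to run a quant straight from this repo is via the `llama-cpp-python` bindings (a sketch, assuming `pip install llama-cpp-python huggingface_hub`; the Q4_K_M file name is taken from the table below):

```python
from llama_cpp import Llama

# Download the chosen quant from this repo via the Hugging Face Hub and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF",
    filename="Llama-3-8B-Instruct-UltraDPO3-NT.Q4_K_M.gguf",
    n_ctx=2048,  # context length; raise it if you have the memory
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize DPO in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```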
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Instruct-UltraDPO3-NT-GGUF/resolve/main/Llama-3-8B-Instruct-UltraDPO3-NT.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers | Tencent-Hunyuan | 2024-07-01T06:52:23Z | 921 | 22 | diffusers | [
"diffusers",
"safetensors",
"en",
"arxiv:2405.08748",
"license:other",
"diffusers:HunyuanDiTPipeline",
"region:us"
] | text-to-image | 2024-07-01T03:04:01Z | ---
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/Tencent-Hunyuan/HunyuanDiT/blob/main/LICENSE.txt
language:
- en
---
<!-- ## **HunyuanDiT** -->
<p align="center">
<img src="https://raw.githubusercontent.com/Tencent/HunyuanDiT/main/asset/logo.png" height=100>
</p>
# Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding
# 混元-DiT: 具有细粒度中文理解的多分辨率Diffusion Transformer
[[Arxiv]](https://arxiv.org/abs/2405.08748) [[project page]](https://dit.hunyuan.tencent.com/) [[github]](https://github.com/Tencent/HunyuanDiT)
This repo contains the pre-trained text-to-image model in 🤗 [Diffusers](https://github.com/huggingface/diffusers) format.
## Dependency
Please install PyTorch first, following the instructions at [https://pytorch.org](https://pytorch.org).
Install the latest version of transformers with `pip`:
```
pip install --upgrade transformers
```
Then install the latest GitHub version of 🤗 Diffusers with `pip`:
```
pip install git+https://github.com/huggingface/diffusers.git
```
## Example Usage with 🤗 Diffusers
```py
import torch
from diffusers import HunyuanDiTPipeline
pipe = HunyuanDiTPipeline.from_pretrained("Tencent-Hunyuan/HunyuanDiT-v1.2-Diffusers", torch_dtype=torch.float16)
pipe.to("cuda")
# You may also use an English prompt, as HunyuanDiT supports both English and Chinese
# prompt = "An astronaut riding a horse"
prompt = "一个宇航员在骑马"  # "An astronaut riding a horse"
image = pipe(prompt).images[0]
image.save("astronaut.png")  # save the generated image
```
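If the fp16 pipeline does not fit in GPU memory, a common 🧨 Diffusers pattern (a sketch, assuming `accelerate` is installed) is to replace `pipe.to("cuda")` with CPU offloading:

```py
pipe.enable_model_cpu_offload()  # keeps submodules on CPU, moving each to GPU only while it runs
```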

## 📈 Comparisons
In order to comprehensively compare the generation capabilities of HunyuanDiT and other models, we constructed a 4-dimensional test set, including Text-Image Consistency, Excluding AI Artifacts, Subject Clarity, and Aesthetics. More than 50 professional evaluators performed the evaluation.
<p align="center">
<table>
<thead>
<tr>
<th rowspan="2">Model</th> <th rowspan="2">Open Source</th> <th>Text-Image Consistency (%)</th> <th>Excluding AI Artifacts (%)</th> <th>Subject Clarity (%)</th> <th rowspan="2">Aesthetics (%)</th> <th rowspan="2">Overall (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>SDXL</td> <td> ✔ </td> <td>64.3</td> <td>60.6</td> <td>91.1</td> <td>76.3</td> <td>42.7</td>
</tr>
<tr>
<td>PixArt-α</td> <td> ✔</td> <td>68.3</td> <td>60.9</td> <td>93.2</td> <td>77.5</td> <td>45.5</td>
</tr>
<tr>
<td>Playground 2.5</td> <td>✔</td> <td>71.9</td> <td>70.8</td> <td>94.9</td> <td>83.3</td> <td>54.3</td>
</tr>
<tr>
<td>SD 3</td> <td>✘</td> <td>77.1</td> <td>69.3</td> <td>94.6</td> <td>82.5</td> <td>56.7</td>
</tr>
<tr>
<td>MidJourney v6</td><td>✘</td> <td>73.5</td> <td>80.2</td> <td>93.5</td> <td>87.2</td> <td>63.3</td>
</tr>
<tr>
<td>DALL-E 3</td><td>✘</td> <td>83.9</td> <td>80.3</td> <td>96.5</td> <td>89.4</td> <td>71.0</td>
</tr>
<tr style="font-weight: bold; background-color: #f2f2f2;">
<td>Hunyuan-DiT</td><td>✔</td> <td>74.2</td> <td>74.3</td> <td>95.4</td> <td>86.6</td> <td>59.0</td>
</tr>
</tbody>
</table>
</p>
## 🎥 Visualization
* **Chinese Elements**
<p align="center">
<img src="https://raw.githubusercontent.com/Tencent/HunyuanDiT/main/asset/chinese elements understanding.png" height=220>
</p>
* **Long Text Input**
<p align="center">
<img src="https://raw.githubusercontent.com/Tencent/HunyuanDiT/main/asset/long text understanding.png" height=310>
</p>
## 🔥🔥🔥 Tencent Hunyuan Bot
Welcome to [Tencent Hunyuan Bot](https://hunyuan.tencent.com/bot/chat), where you can explore our innovative products in multi-round conversation!
|
chendelong/stable-diffusion-3-medium-vae | chendelong | 2024-07-01T06:48:41Z | 36 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2024-07-01T06:48:12Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M_imat_GGUF | BLURPLETESTS | 2024-07-01T06:38:45Z | 7 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Sao10K/L3-8B-Chara-v1-Alpha",
"base_model:quantized:Sao10K/L3-8B-Chara-v1-Alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-07-01T06:38:19Z | ---
base_model: Sao10K/L3-8B-Chara-v1-Alpha
language:
- en
license: cc-by-nc-4.0
tags:
- llama-cpp
- gguf-my-repo
---
# BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF
This model was converted to GGUF format from [`Sao10K/L3-8B-Chara-v1-Alpha`](https://huggingface.co/Sao10K/L3-8B-Chara-v1-Alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Sao10K/L3-8B-Chara-v1-Alpha) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF --hf-file l3-8b-chara-v1-alpha-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF --hf-file l3-8b-chara-v1-alpha-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF --hf-file l3-8b-chara-v1-alpha-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BLURPLETESTS/L3-8B-Chara-v1-Alpha-Q5_K_M-GGUF --hf-file l3-8b-chara-v1-alpha-q5_k_m-imat.gguf -c 2048
```
|
soonmo/OpenHermes-2.5-Mistral-7B-method1 | soonmo | 2024-07-01T06:36:24Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T06:11:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sj2704/dare_llama3-ultramedical_merge | sj2704 | 2024-07-01T06:28:58Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:merge:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:ruslanmv/Medical-Llama3-8B",
"base_model:merge:ruslanmv/Medical-Llama3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T04:00:19Z | ---
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
- ruslanmv/Medical-Llama3-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merged_model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base.
### Models Merged
The following models were included in the merge:
* [ruslanmv/Medical-Llama3-8B](https://huggingface.co/ruslanmv/Medical-Llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ruslanmv/Medical-Llama3-8B
    parameters:
      density: [1, 0.5, 0.33]
      weight: 0.3
  - model: meta-llama/Meta-Llama-3-8B-Instruct
merge_method: dare_ties
base_model: meta-llama/Meta-Llama-3-8B-Instruct
dtype: float16
```
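To apply this configuration yourself, it can be passed to mergekit's CLI (a sketch, assuming `pip install mergekit`): save the YAML above as `config.yaml` and run `mergekit-yaml config.yaml ./merged_model`.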
|
RegalHyperus/RequestedRVCModels | RegalHyperus | 2024-07-01T06:22:21Z | 0 | 2 | null | [
"license:openrail",
"region:us"
] | null | 2023-10-03T14:46:23Z | ---
license: openrail
---
RVC models I made as requests for other people.
I do not take English voice model requests. Thank you very much.
|
Anujgr8/Whisper-Anuj-Medum-Medium-lalo | Anujgr8 | 2024-07-01T06:18:14Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-06-30T19:40:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abdiharyadi/indoamrbart-mbart-triple-ft-parser-no-nst-16-eps-v2 | abdiharyadi | 2024-07-01T06:12:00Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-01T06:04:18Z | ---
tags:
- generated_from_trainer
datasets:
- data
model-index:
- name: indoamrbart-mbart-triple-ft-parser-no-nst-16-eps-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indoamrbart-mbart-triple-ft-parser-no-nst-16-eps-v2
This model was trained from scratch on the data dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 200
- num_epochs: 16.0
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf | RichardErkhov | 2024-07-01T06:11:40Z | 19 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-07-01T02:09:29Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DPOpenHermes-7B - GGUF
- Model creator: https://huggingface.co/openaccess-ai-collective/
- Original model: https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DPOpenHermes-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [DPOpenHermes-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [DPOpenHermes-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [DPOpenHermes-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [DPOpenHermes-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [DPOpenHermes-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [DPOpenHermes-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [DPOpenHermes-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [DPOpenHermes-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [DPOpenHermes-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [DPOpenHermes-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [DPOpenHermes-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [DPOpenHermes-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [DPOpenHermes-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [DPOpenHermes-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [DPOpenHermes-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [DPOpenHermes-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [DPOpenHermes-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [DPOpenHermes-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [DPOpenHermes-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [DPOpenHermes-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [DPOpenHermes-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/openaccess-ai-collective_-_DPOpenHermes-7B-gguf/blob/main/DPOpenHermes-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
---
base_model: teknium/OpenHermes-2.5-Mistral-7B
license: apache-2.0
datasets:
- teknium/openhermes
- argilla/ultrafeedback-binarized-preferences
- Intel/orca_dpo_pairs
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# DPOpenHermes 7B

## OpenHermes x Notus x Neural
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
This is an RL fine-tuned model of [Teknium](https://huggingface.co/teknium)'s [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B), trained on the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) and [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences) preference datasets via Direct Preference Optimization (DPO).
DPOpenHermes is trained using qLoRA. The adapter is also provided in this model repo.
Errata: Due to an issue with the DPO-only version failing to generate an eos token, this model underwent additional SFT with 7000 rows from the openhermes dataset to teach the model to use the eos_token again to end the turn. This resulted in lower benchmark scores. You can find the original DPO-only model in the `dpo-v0` branch.
# Training Details
DPOpenHermes was trained on a single H100 80GB hosted on RunPod for ~10h for 0.6 epochs of the dataset.
https://wandb.ai/oaaic/openhermes-dpo/reports/DPOpenHermes--Vmlldzo2MTQ3NDg2
# Prompt Format
DPOpenHermes uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts are now a thing that matters! Hermes 2.5 was trained to utilize system prompts to engage more strongly with instructions that span many turns.
This is a more complex format than alpaca or sharegpt, where special tokens were added to denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will be familiar with the format, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by a man named Teknium, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openaccess-ai-collective/DPOpenHermes-7B")
model = AutoModelForCausalLM.from_pretrained("openaccess-ai-collective/DPOpenHermes-7B")

messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"}
]
# apply_chat_template tokenizes the ChatML-formatted conversation into a tensor of ids
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
Currently, I recommend using LM Studio for chatting with Hermes 2. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

# Benchmarks
## AGIEval
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat              |      0|acc     |0.2559|± |0.0274|
|                              |       |acc_norm|0.2598|± |0.0276|
|agieval_logiqa_en             |      0|acc     |0.3733|± |0.0190|
|                              |       |acc_norm|0.3886|± |0.0191|
|agieval_lsat_ar               |      0|acc     |0.2522|± |0.0287|
|                              |       |acc_norm|0.2522|± |0.0287|
|agieval_lsat_lr               |      0|acc     |0.5137|± |0.0222|
|                              |       |acc_norm|0.5294|± |0.0221|
|agieval_lsat_rc               |      0|acc     |0.5948|± |0.0300|
|                              |       |acc_norm|0.5725|± |0.0302|
|agieval_sat_en                |      0|acc     |0.7379|± |0.0307|
|                              |       |acc_norm|0.7282|± |0.0311|
|agieval_sat_en_without_passage|      0|acc     |0.4466|± |0.0347|
|                              |       |acc_norm|0.4466|± |0.0347|
|agieval_sat_math              |      0|acc     |0.3909|± |0.0330|
|                              |       |acc_norm|0.3591|± |0.0324|
```
Average: 0.4364
## BigBench Hard
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement                       |      0|multiple_choice_grade|0.5684|± |0.0360|
|bigbench_date_understanding                     |      0|multiple_choice_grade|0.6667|± |0.0246|
|bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3566|± |0.0299|
|bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2006|± |0.0212|
|                                                |       |exact_str_match      |0.0724|± |0.0137|
|bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|± |0.0205|
|bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2071|± |0.0153|
|bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5067|± |0.0289|
|bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4140|± |0.0220|
|bigbench_navigate                               |      0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.6980|± |0.0103|
|bigbench_ruin_names                             |      0|multiple_choice_grade|0.4174|± |0.0233|
|bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2044|± |0.0128|
|bigbench_snarks                                 |      0|multiple_choice_grade|0.7238|± |0.0333|
|bigbench_sports_understanding                   |      0|multiple_choice_grade|0.6876|± |0.0148|
|bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.4360|± |0.0157|
|bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2112|± |0.0115|
|bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1754|± |0.0091|
|bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5067|± |0.0289|
```
Average: 0.4321
## GPT4All
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.5862|± |0.0144|
|             |       |acc_norm|0.6297|± |0.0141|
|arc_easy     |      0|acc     |0.8472|± |0.0074|
|             |       |acc_norm|0.8321|± |0.0077|
|boolq        |      1|acc     |0.8599|± |0.0061|
|hellaswag    |      0|acc     |0.6520|± |0.0048|
|             |       |acc_norm|0.8357|± |0.0037|
|openbookqa   |      0|acc     |0.3440|± |0.0213|
|             |       |acc_norm|0.4580|± |0.0223|
|piqa         |      0|acc     |0.8199|± |0.0090|
|             |       |acc_norm|0.8319|± |0.0087|
|winogrande   |      0|acc     |0.7482|± |0.0122|
```
Average: 0.7422
## TruthfulQA
```
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc|      1|mc1   |0.3941|± |0.0171|
|             |       |mc2   |0.5698|± |0.0154|
```
|
Roaoch/CyberClassic-Generator | Roaoch | 2024-07-01T06:02:49Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-19T12:36:13Z | ---
license: mit
language:
- ru
metrics:
- perplexity
- bleu
- rouge
library_name: transformers
pipeline_tag: text-generation
---
This text generator is based on the OpenAI GPT-2 model from Hugging Face.
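A minimal generation sketch (assuming the standard transformers `text-generation` pipeline; the prompt and sampling settings are illustrative):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Roaoch/CyberClassic-Generator")
print(generator("Однажды весенним вечером", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```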
The base model went through two steps of training.
## First - Finetuning of the base model
In this step, the model is finetuned on a dataset of single sentences from the texts of Dostoevsky F.M.
Training parameters:
* Epoch = 10
* Learning Rate = 1e-3
* Optimizer = AdamW
* Scheduler = OneCycleLR
* Training env = PyTorch


## Second - RL
In this step, the finetuned model went through a reinforcement learning pipeline built with the TRL library.
Training parameters:
* Epoch = 30
* Trainer = PPO
* Query texts = first 100 texts from the dataset, trimmed to their first 3 words
* Reward = score from the [binary classifier](https://huggingface.co/Roaoch/CyberClassic-Discriminator) multiplied by 10 (see the sketch below)
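A minimal sketch of one PPO update of this kind with TRL (the classic `PPOTrainer` API is assumed; `discriminator_score` and the query text are placeholders, not the exact training code):

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

tokenizer = AutoTokenizer.from_pretrained("Roaoch/CyberClassic-Generator")
model = AutoModelForCausalLMWithValueHead.from_pretrained("Roaoch/CyberClassic-Generator")
ppo_trainer = PPOTrainer(PPOConfig(batch_size=1, mini_batch_size=1), model, tokenizer=tokenizer)

# Query: the first 3 words of a dataset text (placeholder string)
query = tokenizer("Однажды весенним вечером", return_tensors="pt").input_ids[0]
response = ppo_trainer.generate(query, max_new_tokens=32, return_prompt=False)[0]

# Reward: discriminator score scaled by 10 (discriminator_score is a placeholder)
discriminator_score = 0.87
reward = torch.tensor(10.0 * discriminator_score)
ppo_trainer.step([query], [response], [reward])
```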

 |
Roaoch/CyberClassic-Discriminator | Roaoch | 2024-07-01T06:02:25Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text-classification",
"ru",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-19T12:37:38Z | ---
license: mit
language:
- ru
metrics:
- f1
library_name: transformers
pipeline_tag: text-classification
---
This model implements a binary sequence classifier that produces a score representing how similar a sequence is to sentences from the texts of Dostoevsky F.M.
The base model is Google's T5, finetuned on a dataset that contains 5700 sentences from the texts of Dostoevsky F.M. with label 1, and 5771 sentences from the texts of Kuprin A.I. plus sentences generated with RuGPT3.
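A minimal scoring sketch (assuming the standard transformers `text-classification` pipeline; the example sentence and the exact label names are illustrative):

```python
from transformers import pipeline

scorer = pipeline("text-classification", model="Roaoch/CyberClassic-Discriminator")
print(scorer("Сердце его билось тяжело и тревожно."))  # e.g. [{'label': '...', 'score': 0.93}]
```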
Training parameters:
* Epoch = 12
* Learning Rate = 1e-3
* Optimizer = AdamW
* Scheduler = OneCycleLR
* Training env = PyTorch


|
lucasbalponti/split4 | lucasbalponti | 2024-07-01T05:50:45Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:neuralmind/bert-large-portuguese-cased",
"base_model:finetune:neuralmind/bert-large-portuguese-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-01T05:49:44Z | ---
license: mit
base_model: neuralmind/bert-large-portuguese-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: split4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# split4
This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3764
- Accuracy: 0.9033
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.272 | 1.0 | 8509 | 0.3425 | 0.8802 |
| 0.2319 | 2.0 | 17018 | 0.3300 | 0.8998 |
| 0.2046 | 3.0 | 25527 | 0.3764 | 0.9033 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
avnishkanungo/distilhubert-finetuned-gtzan | avnishkanungo | 2024-07-01T05:48:39Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-07-01T04:35:42Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4577
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9562 | 1.0 | 113 | 1.8362 | 0.5 |
| 1.1877 | 2.0 | 226 | 1.2579 | 0.62 |
| 1.0263 | 3.0 | 339 | 1.0316 | 0.69 |
| 0.6373 | 4.0 | 452 | 0.7494 | 0.84 |
| 0.5875 | 5.0 | 565 | 0.6581 | 0.85 |
| 0.428 | 6.0 | 678 | 0.5088 | 0.89 |
| 0.3152 | 7.0 | 791 | 0.4619 | 0.86 |
| 0.1577 | 8.0 | 904 | 0.4274 | 0.88 |
| 0.2456 | 9.0 | 1017 | 0.4739 | 0.88 |
| 0.0905 | 10.0 | 1130 | 0.4577 | 0.87 |
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mdtashhirulislam/git-base-pokemon | mdtashhirulislam | 2024-07-01T05:15:48Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"git",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/git-base",
"base_model:finetune:microsoft/git-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-06-24T09:24:13Z | ---
license: mit
base_model: microsoft/git-base
tags:
- generated_from_trainer
model-index:
- name: git-base-pokemon
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# git-base-pokemon
This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.0a0+29c30b1
- Datasets 2.19.2
- Tokenizers 0.19.1
|
BothBosu/cnn-agent-scam-classifier-v1.0 | BothBosu | 2024-07-01T05:08:03Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T05:08:01Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
BothBosu/bigru-agent-scam-classifier-v1.0 | BothBosu | 2024-07-01T05:07:44Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T05:07:37Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
achoi1107/dummy-model | achoi1107 | 2024-07-01T05:06:03Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-06-29T20:02:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BothBosu/gru-agent-scam-classifier-v1.0 | BothBosu | 2024-07-01T05:01:20Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T05:01:15Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
BothBosu/lstm-agent-scam-classifier-v1.0 | BothBosu | 2024-07-01T04:54:17Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"lstm",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-07-01T04:54:10Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
sosoai/Hansoldeco-Gemma-2-9b-it-v0.1 | sosoai | 2024-07-01T04:48:00Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T02:10:20Z | Gemma-2-9b-it fine-tuned on our own data. |
Benedict-L/layoutlm-funsd1 | Benedict-L | 2024-07-01T04:35:48Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-06-20T08:48:42Z | ---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd1
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6746
| Label | Precision | Recall | F1 | Support |
|:--------:|:---------:|:------:|:------:|:-------:|
| Answer | 0.6506 | 0.7664 | 0.7037 | 809 |
| Header | 0.2093 | 0.1513 | 0.1756 | 119 |
| Question | 0.7188 | 0.8066 | 0.7602 | 1065 |
- Overall Precision: 0.6701
- Overall Recall: 0.7511
- Overall F1: 0.7083
- Overall Accuracy: 0.7973
## Model description
More information needed
## Intended uses & limitations
More information needed
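Pending official usage notes, the sketch below shows one plausible way to query the model for form token classification; the words, bounding boxes, and labels are hypothetical, and LayoutLM expects boxes normalized to a 0-1000 coordinate range.

```python
# Minimal usage sketch (not from the original card); words/boxes are
# hypothetical OCR output with coordinates normalized to 0-1000.
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizer

model_id = "Benedict-L/layoutlm-funsd1"
tokenizer = LayoutLMTokenizer.from_pretrained(model_id)
model = LayoutLMForTokenClassification.from_pretrained(model_id)

words = ["Invoice", "Number:", "12345"]
word_boxes = [[60, 50, 160, 70], [170, 50, 260, 70], [270, 50, 340, 70]]

# LayoutLM (v1) needs one box per wordpiece: repeat each word's box for
# every subtoken, then add placeholder boxes for [CLS] and [SEP].
tokens, boxes = [], []
for word, box in zip(words, word_boxes):
    pieces = tokenizer.tokenize(word)
    tokens.extend(pieces)
    boxes.extend([box] * len(pieces))

input_ids = tokenizer.convert_tokens_to_ids(["[CLS]"] + tokens + ["[SEP]"])
bbox = [[0, 0, 0, 0]] + boxes + [[1000, 1000, 1000, 1000]]

with torch.no_grad():
    logits = model(input_ids=torch.tensor([input_ids]),
                   bbox=torch.tensor([bbox])).logits

predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```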
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.8511 | 1.0 | 10 | 1.6077 | {'precision': 0.01362088535754824, 'recall': 0.014833127317676144, 'f1': 0.014201183431952664, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.17871759890859482, 'recall': 0.12300469483568074, 'f1': 0.1457174638487208, 'number': 1065} | 0.0886 | 0.0718 | 0.0793 | 0.3669 |
| 1.4863 | 2.0 | 20 | 1.2821 | {'precision': 0.14936708860759493, 'recall': 0.14585908529048208, 'f1': 0.14759224515322075, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4211309523809524, 'recall': 0.5314553990610329, 'f1': 0.46990452469904526, 'number': 1065} | 0.3204 | 0.3432 | 0.3314 | 0.5815 |
| 1.1566 | 3.0 | 30 | 1.0398 | {'precision': 0.38341968911917096, 'recall': 0.3658838071693449, 'f1': 0.3744465528146742, 'number': 809} | {'precision': 0.04, 'recall': 0.008403361344537815, 'f1': 0.01388888888888889, 'number': 119} | {'precision': 0.5764705882352941, 'recall': 0.644131455399061, 'f1': 0.6084257206208424, 'number': 1065} | 0.4947 | 0.4932 | 0.4940 | 0.6493 |
| 0.9277 | 4.0 | 40 | 0.8788 | {'precision': 0.5094339622641509, 'recall': 0.6007416563658838, 'f1': 0.5513329551900171, 'number': 809} | {'precision': 0.19047619047619047, 'recall': 0.06722689075630252, 'f1': 0.09937888198757765, 'number': 119} | {'precision': 0.6472172351885098, 'recall': 0.6769953051643193, 'f1': 0.6617714547957778, 'number': 1065} | 0.5758 | 0.6096 | 0.5922 | 0.7266 |
| 0.7448 | 5.0 | 50 | 0.7982 | {'precision': 0.5696594427244582, 'recall': 0.6823238566131026, 'f1': 0.6209223847019122, 'number': 809} | {'precision': 0.2, 'recall': 0.11764705882352941, 'f1': 0.14814814814814817, 'number': 119} | {'precision': 0.6689478186484175, 'recall': 0.7342723004694836, 'f1': 0.7000895255147717, 'number': 1065} | 0.6105 | 0.6764 | 0.6418 | 0.7475 |
| 0.6273 | 6.0 | 60 | 0.7378 | {'precision': 0.6345549738219896, 'recall': 0.7490729295426453, 'f1': 0.6870748299319728, 'number': 809} | {'precision': 0.21052631578947367, 'recall': 0.13445378151260504, 'f1': 0.1641025641025641, 'number': 119} | {'precision': 0.6871270247229326, 'recall': 0.7568075117370892, 'f1': 0.7202859696157283, 'number': 1065} | 0.6479 | 0.7165 | 0.6805 | 0.7778 |
| 0.5778 | 7.0 | 70 | 0.6971 | {'precision': 0.6439075630252101, 'recall': 0.757725587144623, 'f1': 0.6961953435547985, 'number': 809} | {'precision': 0.20238095238095238, 'recall': 0.14285714285714285, 'f1': 0.16748768472906403, 'number': 119} | {'precision': 0.6765412329863891, 'recall': 0.7934272300469484, 'f1': 0.7303370786516853, 'number': 1065} | 0.6455 | 0.7401 | 0.6896 | 0.7825 |
| 0.5262 | 8.0 | 80 | 0.6989 | {'precision': 0.6372141372141372, 'recall': 0.757725587144623, 'f1': 0.6922642574816488, 'number': 809} | {'precision': 0.20689655172413793, 'recall': 0.15126050420168066, 'f1': 0.17475728155339806, 'number': 119} | {'precision': 0.7364685004436557, 'recall': 0.7793427230046949, 'f1': 0.7572992700729927, 'number': 1065} | 0.6714 | 0.7331 | 0.7009 | 0.7963 |
| 0.4867 | 9.0 | 90 | 0.6756 | {'precision': 0.6428571428571429, 'recall': 0.7564894932014833, 'f1': 0.6950596252129472, 'number': 809} | {'precision': 0.1935483870967742, 'recall': 0.15126050420168066, 'f1': 0.169811320754717, 'number': 119} | {'precision': 0.7079207920792079, 'recall': 0.8056338028169014, 'f1': 0.7536231884057971, 'number': 1065} | 0.6593 | 0.7466 | 0.7002 | 0.7951 |
| 0.4757 | 10.0 | 100 | 0.6746 | {'precision': 0.6505771248688352, 'recall': 0.7663782447466008, 'f1': 0.7037457434733257, 'number': 809} | {'precision': 0.20930232558139536, 'recall': 0.15126050420168066, 'f1': 0.17560975609756097, 'number': 119} | {'precision': 0.7188284518828452, 'recall': 0.8065727699530516, 'f1': 0.7601769911504423, 'number': 1065} | 0.6701 | 0.7511 | 0.7083 | 0.7973 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
juliulli/GPL_07_50000 | juliulli | 2024-07-01T04:30:19Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-07-01T04:29:37Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# juliulli/GPL_07_50000
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('juliulli/GPL_07_50000')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('juliulli/GPL_07_50000')
model = AutoModel.from_pretrained('juliulli/GPL_07_50000')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=juliulli/GPL_07_50000)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 330000 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 330000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
makhataei/emotion_recognition_ru | makhataei | 2024-07-01T04:11:23Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"Speech-Emotion-Recognition",
"generated_from_trainer",
"dataset:dusha_emotion_audio",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-29T06:17:58Z | ---
license: apache-2.0
tags:
- Speech-Emotion-Recognition
- generated_from_trainer
datasets:
- dusha_emotion_audio
metrics:
- accuracy
model-index:
- name: Wav2vec2-xls-r-300m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2vec2-xls-r-300m
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the KELONMYOSA/dusha_emotion_audio dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5633
- Accuracy: 0.7970
## Model description
More information needed
## Intended uses & limitations
More information needed
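In lieu of official usage notes, a minimal inference sketch is shown below; it assumes the checkpoint exposes the standard audio-classification head, and the audio file name is hypothetical.

```python
# Minimal sketch (assumed usage): classify the emotion of a Russian
# speech clip with the transformers audio-classification pipeline.
from transformers import pipeline

classifier = pipeline("audio-classification",
                      model="makhataei/emotion_recognition_ru")

# "speech_sample.wav" is a hypothetical 16 kHz mono recording.
for result in classifier("speech_sample.wav"):
    print(result["label"], round(result["score"], 3))
```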
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7868 | 1.0 | 24170 | 0.7561 | 0.7318 |
| 0.7147 | 2.0 | 48340 | 0.6984 | 0.7459 |
| 0.669 | 3.0 | 72510 | 0.6263 | 0.7727 |
| 0.6362 | 4.0 | 96680 | 0.5832 | 0.7902 |
| 0.4476 | 5.0 | 120850 | 0.5633 | 0.7970 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
joshnader/deepseek-math-7b-instruct-Q8_0-GGUF | joshnader | 2024-07-01T04:06:33Z | 17 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:deepseek-ai/deepseek-math-7b-instruct",
"base_model:quantized:deepseek-ai/deepseek-math-7b-instruct",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-07-01T04:06:03Z | ---
base_model: deepseek-ai/deepseek-math-7b-instruct
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Math/blob/main/LICENSE-MODEL
tags:
- llama-cpp
- gguf-my-repo
---
# joshnader/deepseek-math-7b-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`deepseek-ai/deepseek-math-7b-instruct`](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo joshnader/deepseek-math-7b-instruct-Q8_0-GGUF --hf-file deepseek-math-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo joshnader/deepseek-math-7b-instruct-Q8_0-GGUF --hf-file deepseek-math-7b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo joshnader/deepseek-math-7b-instruct-Q8_0-GGUF --hf-file deepseek-math-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo joshnader/deepseek-math-7b-instruct-Q8_0-GGUF --hf-file deepseek-math-7b-instruct-q8_0.gguf -c 2048
```
|
YongjieNiu/prior-Leak-adl-cat-1-500 | YongjieNiu | 2024-07-01T03:56:36Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"license:openrail++",
"region:us"
] | text-to-image | 2024-07-01T02:49:11Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: SDXL_model
instance_prompt: a photo of adl cat
widget:
- text: a photo of adl cat by the sea
output:
url: image_0.png
- text: a photo of adl cat by the sea
output:
url: image_1.png
- text: a photo of adl cat by the sea
output:
url: image_2.png
- text: a photo of adl cat by the sea
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - YongjieNiu/prior-Leak-adl-cat-1-500
<Gallery />
## Model description
These are YongjieNiu/prior-Leak-adl-cat-1-500 LoRA adaptation weights for SDXL_model.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: VAE.
## Trigger words
You should use `a photo of adl cat` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/YongjieNiu/prior-Leak-adl-cat-1-500/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
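Until the official snippet is added above, here is a provisional sketch; it assumes the standard diffusers SDXL pipeline and substitutes `stabilityai/stable-diffusion-xl-base-1.0` for the base checkpoint, which the card identifies only as `SDXL_model`.

```python
# Provisional sketch (not the authors' snippet): load an SDXL base
# pipeline and apply these LoRA weights. The base checkpoint id is an
# assumption -- the card only names the base model "SDXL_model".
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("YongjieNiu/prior-Leak-adl-cat-1-500")

image = pipe("a photo of adl cat by the sea").images[0]
image.save("adl_cat.png")
```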
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
tsavage68/Summary4500_L3_550steps_1e7rate_SFT | tsavage68 | 2024-07-01T03:48:08Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T03:41:36Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Summary4500_L3_550steps_1e7rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary4500_L3_550steps_1e7rate_SFT
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8315
## Model description
More information needed
## Intended uses & limitations
More information needed
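In the absence of usage notes, a minimal inference sketch is given below; it assumes the chat template inherited from the Llama-3 base model, and the input text is hypothetical.

```python
# Minimal sketch (assumed usage) for the fine-tuned summarizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/Summary4500_L3_550steps_1e7rate_SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto",
                                             device_map="auto")

messages = [{"role": "user",
             "content": "Summarize the following note: ..."}]  # hypothetical input
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                          return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```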
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 550
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1288 | 0.0447 | 50 | 2.1429 |
| 2.072 | 0.0895 | 100 | 2.0889 |
| 1.9958 | 0.1342 | 150 | 2.0063 |
| 1.9565 | 0.1790 | 200 | 1.9402 |
| 1.8799 | 0.2237 | 250 | 1.8919 |
| 1.8401 | 0.2685 | 300 | 1.8599 |
| 1.8376 | 0.3132 | 350 | 1.8413 |
| 1.8122 | 0.3579 | 400 | 1.8330 |
| 1.8313 | 0.4027 | 450 | 1.8319 |
| 1.7982 | 0.4474 | 500 | 1.8314 |
| 1.8176 | 0.4922 | 550 | 1.8315 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
second-state/MistralLite-7B-GGUF | second-state | 2024-07-01T03:42:51Z | 131 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"base_model:amazon/MistralLite",
"base_model:quantized:amazon/MistralLite",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-11-02T08:05:36Z | ---
base_model: amazon/MistralLite
inference: false
license: apache-2.0
model_creator: Amazon Web Services
model_name: MistralLite 7B
model_type: mistral
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# MistralLite-7B-GGUF
## Original Model
[amazon/MistralLite](https://huggingface.co/amazon/MistralLite)
## Run with LlamaEdge
- LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
- Prompt template
- Prompt type: `mistrallite`
- Prompt string
```text
<|prompter|>{user_message}</s><|assistant|>{assistant_message}</s>
```
- Reverse prompt: `</s>`
- Context size: `4096`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:MistralLite-Q5_K_M.gguf llama-api-server.wasm -p mistrallite -r '</s>'
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:MistralLite-Q5_K_M.gguf llama-chat.wasm -p mistrallite -r '</s>'
```
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [MistralLite-Q2_K.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q2_K.gguf) | Q2_K | 2 | 2.7 GB| smallest, significant quality loss - not recommended for most purposes |
| [MistralLite-Q3_K_L.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| small, substantial quality loss |
| [MistralLite-Q3_K_M.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| very small, high quality loss |
| [MistralLite-Q3_K_S.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| very small, high quality loss |
| [MistralLite-Q4_0.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [MistralLite-Q4_K_M.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| medium, balanced quality - recommended |
| [MistralLite-Q4_K_S.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| small, greater quality loss |
| [MistralLite-Q5_0.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [MistralLite-Q5_K_M.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| large, very low quality loss - recommended |
| [MistralLite-Q5_K_S.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| large, low quality loss - recommended |
| [MistralLite-Q6_K.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q6_K.gguf) | Q6_K | 6 | 5.94 GB| very large, extremely low quality loss |
| [MistralLite-Q8_0.gguf](https://huggingface.co/second-state/MistralLite-7B-GGUF/blob/main/MistralLite-Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| very large, extremely low quality loss - not recommended |
|
RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf | RichardErkhov | 2024-07-01T03:39:04Z | 19 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-29T23:58:43Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-2-70b-chat-hf - GGUF
- Model creator: https://huggingface.co/NousResearch/
- Original model: https://huggingface.co/NousResearch/Llama-2-70b-chat-hf/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-2-70b-chat-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.Q2_K.gguf) | Q2_K | 23.71GB |
| [Llama-2-70b-chat-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.IQ3_XS.gguf) | IQ3_XS | 26.37GB |
| [Llama-2-70b-chat-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.IQ3_S.gguf) | IQ3_S | 27.86GB |
| [Llama-2-70b-chat-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.Q3_K_S.gguf) | Q3_K_S | 27.86GB |
| [Llama-2-70b-chat-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.IQ3_M.gguf) | IQ3_M | 28.82GB |
| [Llama-2-70b-chat-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.Q3_K.gguf) | Q3_K | 30.99GB |
| [Llama-2-70b-chat-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.Q3_K_M.gguf) | Q3_K_M | 30.99GB |
| [Llama-2-70b-chat-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.Q3_K_L.gguf) | Q3_K_L | 33.67GB |
| [Llama-2-70b-chat-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.IQ4_XS.gguf) | IQ4_XS | 34.64GB |
| [Llama-2-70b-chat-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.Q4_0.gguf) | Q4_0 | 36.2GB |
| [Llama-2-70b-chat-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.IQ4_NL.gguf) | IQ4_NL | 36.55GB |
| [Llama-2-70b-chat-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/blob/main/Llama-2-70b-chat-hf.Q4_K_S.gguf) | Q4_K_S | 36.55GB |
| [Llama-2-70b-chat-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q4_K | 38.58GB |
| [Llama-2-70b-chat-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q4_K_M | 38.58GB |
| [Llama-2-70b-chat-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q4_1 | 40.2GB |
| [Llama-2-70b-chat-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q5_0 | 44.2GB |
| [Llama-2-70b-chat-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q5_K_S | 44.2GB |
| [Llama-2-70b-chat-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q5_K | 45.41GB |
| [Llama-2-70b-chat-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q5_K_M | 45.41GB |
| [Llama-2-70b-chat-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q5_1 | 48.2GB |
| [Llama-2-70b-chat-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q6_K | 52.7GB |
| [Llama-2-70b-chat-hf.Q8_0.gguf](https://huggingface.co/RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf/tree/main/) | Q8_0 | 68.26GB |
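The quant table above ships without usage notes; as a minimal sketch, one of the files can be run with the llama-cpp-python bindings (an assumption on my part, since the card does not name a runtime), using the standard Llama-2-Chat `[INST]` prompt format.

```python
# Minimal sketch (assumed usage via llama-cpp-python; prompt text is
# hypothetical). Any of the .gguf files listed above should work.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/NousResearch_-_Llama-2-70b-chat-hf-gguf",
    filename="Llama-2-70b-chat-hf.Q4_0.gguf",
)
out = llm("[INST] Write a haiku about quantization. [/INST]", max_tokens=64)
print(out["choices"][0]["text"])
```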
Original model description:
---
extra_gated_heading: Access Llama 2 on Hugging Face
extra_gated_description: >-
This is a form to enable access to Llama 2 on Hugging Face after you have been
granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our
license terms and acceptable use policy before submitting this form. Requests
will be processed in 1-2 days.
extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**"
extra_gated_button_content: Submit
extra_gated_fields:
I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|
|
Granther/Phi3-128k-Instruct-4Bit-GPTQ | Granther | 2024-07-01T03:37:21Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-06-28T05:43:45Z | ---
language:
- en
license: mit
pipeline_tag: text-generation
---
# Phi3 Mini 128k 4 Bit Quantized
- 4-bit quantized version of Microsoft's Phi3 Mini 128k: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct
- Quantized with Hugging Face's 🤗 GPTQQuantizer
### Flash Attention
- The Phi3 family supports Flash Attention 2, a mechanism that allows faster inference with lower resource use.
- When quantizing Phi3 on an RTX 4090 (24 GB) with Flash Attention disabled, quantization failed due to insufficient VRAM.
- Enabling Flash Attention allowed quantization to complete, with an extra 10 gigabytes of VRAM still available on the GPU (a rough sketch of this setup follows below).
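A rough reconstruction of the quantization step is sketched below; the calibration dataset, output path, and exact settings are assumptions, not the author's script.

```python
# Assumed workflow (not the author's exact script; requires the
# optimum and auto-gptq backends): 4-bit GPTQ quantization of
# Phi-3-mini-128k with Flash Attention 2 enabled.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)  # assumed calibration set

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,          # quantizes during load
    attn_implementation="flash_attention_2",  # needed to fit in 24 GB
    device_map="auto",
    trust_remote_code=True,
)
model.save_pretrained("Phi3-128k-Instruct-4Bit-GPTQ")  # hypothetical output dir
```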
### Metrics
###### Total Size:
- Before: 7.64G
- After: 2.28G
###### VRAM Size:
- Before: 11.47G
- After: 6.57G
###### Average Inference Time:
- Before: 12ms/token
- After: 5ms/token
|
bigstorm/dolphin-2.9.2-qwen2-72b-7.0bpw-8hb-exl2 | bigstorm | 2024-07-01T03:17:58Z | 6 | 2 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"axolotl",
"conversational",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:Qwen/Qwen2-72B",
"base_model:quantized:Qwen/Qwen2-72B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2024-07-01T02:38:39Z | ---
license: other
license_name: tongyi-qianwen
license_link: >-
https://huggingface.co/Qwen/Qwen1.5-110B/blob/main/LICENSE
base_model: Qwen/Qwen2-72B
tags:
- generated_from_trainer
- axolotl
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# BigStorm - ExLlamaV2 (Exl2) Quantization
- 7.0 bpw target
- 8 head bits
Enjoy! Raise an issue if you'd like other BPW levels.
**Base Model Card Follows:**
---
# Dolphin 2.9.2 Qwen2 72B 🐬
Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations
[](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/ldkN1J0WIDQwU4vutGYiD.png" width="600" />
Our appreciation for the sponsors of Dolphin 2.9.2:
- [Crusoe Cloud](https://crusoe.ai/) - provided an excellent on-demand 8xH100 node
This model is based on Qwen2-72B and is governed by the [tongyi-qianwen license](LICENSE)
The base model has a 128k context window; the full-weight fine-tuning was done with an 8k sequence length.
This model was trained with full-weight fine-tuning (FFT) on parameters selected by [Laser Scanner](https://github.com/cognitivecomputations/laserRMT/blob/main/laser_scanner.py), using the ChatML prompt template format.
example:
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Dolphin-2.9.2 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
Dolphin is uncensored. We have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models. https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
Dolphin is licensed according to Qwen's tongyi-qianwen license. We grant permission for any use, including commercial, that is in accordance with said license. Dolphin was trained on data generated from GPT4, among other models.
## Evals

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: Qwen/Qwen2-72B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true
# load_in_8bit: true
# load_in_4bit: false
# strict: false
datasets:
- path: /workspace/datasets/dolphin-2.9.2/dolphin201-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/dolphin-coder-codegen-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/dolphin-coder-translate-sharegpt2.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/m-a-p_Code-Feedback-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/m-a-p_CodeFeedback-Filtered-Instruction-sharegpt-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/not_samantha_norefusals.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/openhermes200k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/Orca-Math-resort-unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/SystemChat_sharegpt.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/toolbench_instruct_j1s1_3k_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/toolbench_negative_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/toolbench_react_10p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/toolbench_tflan_cot_30p_unfiltered.jsonl
type: sharegpt
conversation: chatml
- path: /workspace/datasets/dolphin-2.9.2/agent_instruct_react_unfiltered.jsonl
type: sharegpt
conversation: chatml
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# mlp.down_proj layers
- model.layers.62.mlp.down_proj
- model.layers.63.mlp.down_proj
- model.layers.66.mlp.down_proj
- model.layers.65.mlp.down_proj
- model.layers.64.mlp.down_proj
- model.layers.67.mlp.down_proj
- model.layers.68.mlp.down_proj
- model.layers.60.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.69.mlp.down_proj
- model.layers.61.mlp.down_proj
- model.layers.59.mlp.down_proj
- model.layers.70.mlp.down_proj
- model.layers.30.mlp.down_proj
- model.layers.76.mlp.down_proj
- model.layers.72.mlp.down_proj
- model.layers.77.mlp.down_proj
- model.layers.71.mlp.down_proj
- model.layers.29.mlp.down_proj
- model.layers.58.mlp.down_proj
- model.layers.75.mlp.down_proj
- model.layers.32.mlp.down_proj
- model.layers.56.mlp.down_proj
- model.layers.28.mlp.down_proj
- model.layers.26.mlp.down_proj
- model.layers.33.mlp.down_proj
- model.layers.34.mlp.down_proj
- model.layers.57.mlp.down_proj
- model.layers.27.mlp.down_proj
- model.layers.25.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.73.mlp.down_proj
- model.layers.24.mlp.down_proj
- model.layers.78.mlp.down_proj
- model.layers.74.mlp.down_proj
- model.layers.54.mlp.down_proj
# mlp.gate_proj layers
- model.layers.78.mlp.gate_proj
- model.layers.77.mlp.gate_proj
- model.layers.76.mlp.gate_proj
- model.layers.79.mlp.gate_proj
- model.layers.75.mlp.gate_proj
- model.layers.74.mlp.gate_proj
- model.layers.73.mlp.gate_proj
- model.layers.70.mlp.gate_proj
- model.layers.72.mlp.gate_proj
- model.layers.71.mlp.gate_proj
- model.layers.69.mlp.gate_proj
- model.layers.54.mlp.gate_proj
- model.layers.68.mlp.gate_proj
- model.layers.57.mlp.gate_proj
- model.layers.63.mlp.gate_proj
- model.layers.49.mlp.gate_proj
- model.layers.55.mlp.gate_proj
- model.layers.53.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.67.mlp.gate_proj
- model.layers.58.mlp.gate_proj
- model.layers.56.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.50.mlp.gate_proj
- model.layers.62.mlp.gate_proj
- model.layers.64.mlp.gate_proj
- model.layers.48.mlp.gate_proj
- model.layers.66.mlp.gate_proj
- model.layers.52.mlp.gate_proj
- model.layers.40.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.43.mlp.gate_proj
- model.layers.65.mlp.gate_proj
- model.layers.61.mlp.gate_proj
- model.layers.59.mlp.gate_proj
# mlp.up_proj layers
- model.layers.69.mlp.up_proj
- model.layers.70.mlp.up_proj
- model.layers.71.mlp.up_proj
- model.layers.68.mlp.up_proj
- model.layers.67.mlp.up_proj
- model.layers.66.mlp.up_proj
- model.layers.46.mlp.up_proj
- model.layers.63.mlp.up_proj
- model.layers.72.mlp.up_proj
- model.layers.64.mlp.up_proj
- model.layers.62.mlp.up_proj
- model.layers.45.mlp.up_proj
- model.layers.65.mlp.up_proj
- model.layers.73.mlp.up_proj
- model.layers.47.mlp.up_proj
- model.layers.44.mlp.up_proj
- model.layers.49.mlp.up_proj
- model.layers.48.mlp.up_proj
- model.layers.53.mlp.up_proj
- model.layers.74.mlp.up_proj
- model.layers.75.mlp.up_proj
- model.layers.57.mlp.up_proj
- model.layers.76.mlp.up_proj
- model.layers.43.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.61.mlp.up_proj
- model.layers.40.mlp.up_proj
- model.layers.56.mlp.up_proj
- model.layers.60.mlp.up_proj
- model.layers.31.mlp.up_proj
- model.layers.54.mlp.up_proj
- model.layers.55.mlp.up_proj
- model.layers.32.mlp.up_proj
- model.layers.41.mlp.up_proj
- model.layers.33.mlp.up_proj
- model.layers.58.mlp.up_proj
# self_attn.k_proj layers
- model.layers.79.self_attn.k_proj
- model.layers.36.self_attn.k_proj
- model.layers.35.self_attn.k_proj
- model.layers.74.self_attn.k_proj
- model.layers.34.self_attn.k_proj
- model.layers.78.self_attn.k_proj
- model.layers.77.self_attn.k_proj
- model.layers.37.self_attn.k_proj
- model.layers.39.self_attn.k_proj
- model.layers.41.self_attn.k_proj
- model.layers.38.self_attn.k_proj
- model.layers.33.self_attn.k_proj
- model.layers.69.self_attn.k_proj
- model.layers.42.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.25.self_attn.k_proj
- model.layers.70.self_attn.k_proj
- model.layers.22.self_attn.k_proj
- model.layers.63.self_attn.k_proj
- model.layers.29.self_attn.k_proj
- model.layers.68.self_attn.k_proj
- model.layers.24.self_attn.k_proj
- model.layers.30.self_attn.k_proj
- model.layers.66.self_attn.k_proj
- model.layers.31.self_attn.k_proj
- model.layers.23.self_attn.k_proj
- model.layers.65.self_attn.k_proj
- model.layers.57.self_attn.k_proj
- model.layers.28.self_attn.k_proj
- model.layers.64.self_attn.k_proj
- model.layers.44.self_attn.k_proj
- model.layers.27.self_attn.k_proj
- model.layers.75.self_attn.k_proj
- model.layers.40.self_attn.k_proj
- model.layers.26.self_attn.k_proj
- model.layers.61.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.14.self_attn.o_proj
- model.layers.39.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.15.self_attn.o_proj
- model.layers.69.self_attn.o_proj
- model.layers.12.self_attn.o_proj
- model.layers.42.self_attn.o_proj
- model.layers.23.self_attn.o_proj
- model.layers.22.self_attn.o_proj
- model.layers.29.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.46.self_attn.o_proj
- model.layers.52.self_attn.o_proj
- model.layers.26.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.41.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.49.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.25.self_attn.o_proj
- model.layers.47.self_attn.o_proj
- model.layers.53.self_attn.o_proj
- model.layers.27.self_attn.o_proj
- model.layers.37.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.43.self_attn.o_proj
- model.layers.44.self_attn.o_proj
- model.layers.45.self_attn.o_proj
- model.layers.30.self_attn.o_proj
- model.layers.24.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.10.self_attn.o_proj
- model.layers.3.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.5.self_attn.q_proj
- model.layers.4.self_attn.q_proj
- model.layers.0.self_attn.q_proj
- model.layers.6.self_attn.q_proj
- model.layers.8.self_attn.q_proj
- model.layers.7.self_attn.q_proj
- model.layers.9.self_attn.q_proj
- model.layers.10.self_attn.q_proj
- model.layers.12.self_attn.q_proj
- model.layers.19.self_attn.q_proj
- model.layers.18.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.11.self_attn.q_proj
- model.layers.15.self_attn.q_proj
- model.layers.61.self_attn.q_proj
- model.layers.17.self_attn.q_proj
- model.layers.55.self_attn.q_proj
- model.layers.54.self_attn.q_proj
- model.layers.16.self_attn.q_proj
- model.layers.68.self_attn.q_proj
- model.layers.49.self_attn.q_proj
- model.layers.48.self_attn.q_proj
- model.layers.52.self_attn.q_proj
- model.layers.13.self_attn.q_proj
- model.layers.42.self_attn.q_proj
- model.layers.57.self_attn.q_proj
- model.layers.60.self_attn.q_proj
- model.layers.53.self_attn.q_proj
- model.layers.64.self_attn.q_proj
- model.layers.66.self_attn.q_proj
- model.layers.62.self_attn.q_proj
- model.layers.59.self_attn.q_proj
- model.layers.50.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.15.self_attn.v_proj
- model.layers.16.self_attn.v_proj
- model.layers.23.self_attn.v_proj
- model.layers.24.self_attn.v_proj
- model.layers.25.self_attn.v_proj
- model.layers.26.self_attn.v_proj
- model.layers.27.self_attn.v_proj
- model.layers.28.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.32.self_attn.v_proj
- model.layers.33.self_attn.v_proj
- model.layers.34.self_attn.v_proj
- model.layers.35.self_attn.v_proj
- model.layers.36.self_attn.v_proj
- model.layers.37.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.41.self_attn.v_proj
- model.layers.42.self_attn.v_proj
- model.layers.48.self_attn.v_proj
- model.layers.53.self_attn.v_proj
- model.layers.57.self_attn.v_proj
- model.layers.58.self_attn.v_proj
- model.layers.59.self_attn.v_proj
- model.layers.61.self_attn.v_proj
- model.layers.63.self_attn.v_proj
- model.layers.64.self_attn.v_proj
- model.layers.65.self_attn.v_proj
- model.layers.66.self_attn.v_proj
- model.layers.69.self_attn.v_proj
- model.layers.74.self_attn.v_proj
- model.layers.75.self_attn.v_proj
- model.layers.76.self_attn.v_proj
- model.layers.72.self_attn.v_proj
chat_template: chatml
dataset_prepared_path: qwen2-72b-data
val_set_size: 0.01
output_dir: qwen2-72b
sequence_len: 8192 # supports up to 8192
sample_packing: true
pad_to_sequence_len: true
# adapter: lora
# lora_model_dir:
# lora_r: 32
# lora_alpha: 16
# lora_dropout: 0.05
# lora_target_linear: true
# lora_fan_in_fan_out:
wandb_project: qwen2-72b
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 4
save_total_limit: 2
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
pad_token: "<|endoftext|>"
eos_token: "<|im_end|>"
```
|
yulan-team/YuLan-Base-12b | yulan-team | 2024-07-01T03:04:54Z | 16 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:2406.19853",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-27T15:17:43Z | ---
license: mit
---
<div align=center>
<h1>YuLan-Chat: An Open-Source Bilingual Chatbot</h1>
</div>
YuLan-Chat models are chat-based large language models developed by researchers at GSAI, Renmin University of China (YuLan, which represents Yulan Magnolia, is the campus flower of Renmin University of China). The newest version was developed by pre-training from scratch and supervised fine-tuning via curriculum learning with high-quality English and Chinese instructions and human preference data. The model has the following technical characteristics:
- Owing to large-scale pre-training on high-quality English, Chinese, and multilingual data, the model's language ability has been improved.
- Owing to the curriculum learning strategy for human alignment, the helpfulness, honesty, and harmlessness of our model have been enhanced.
- To better support longer Chinese inputs and outputs, we expanded the vocabulary with Chinese words and increased the maximum input length; the model now supports a 4k context.
> YuLan-Chat系列模型是中国人民大学高瓴人工智能学院师生共同开发的支持聊天的大语言模型(名字"玉兰"取自中国人民大学校花)。最新版本从头完成了整个预训练过程,并采用课程学习技术基于中英文双语数据进行有监督微调,包括高质量指令和人类偏好数据。该版模型具有如下技术特点:
> - 由于在大规模中英双语数据上进行了继续预训练,模型的语言能力得到提高;
> - 由于采用了课程学习方法进行人类对齐训练,模型在真实场景下的有用性、诚实性与无害性得到了增强;
> - 为了更好的支持中文和更长的输入输出,模型的词表及长度得到了扩充,目前可支持4k上下文。
## News
* **\[Jul. 1, 2024\]** We release **YuLan-Base-12B**, an LLM trained from scratch, and its chat-based version **YuLan-Chat-3-12B**. We pre-train the base model on over 1.6TB of English, Chinese, and multilingual tokens, and then perform supervised fine-tuning via curriculum learning with high-quality English and Chinese instructions and human preference data to obtain the chat model.
* **\[Aug. 18, 2023\]** Our **YuLan-Chat-2-13B** achieves the 5th position of [OpenCompass](https://opencompass.org.cn/leaderboard-llm) benchmark!
* **\[Aug. 02, 2023\]** We release **YuLan-LLaMA-2-13B** and **YuLan-Chat-2-13B**. Both models have been continually pre-trained on English and Chinese corpus based on LLaMA-2, and YuLan-Chat-2-13B is the chat-based LLM based on YuLan-LLaMA-2-13B, with high-quality English and Chinese instructions.
* **\[Aug. 02, 2023\]** We release **YuLan-Chat-1-65B-v2**, a chat-based LLM based on LLaMA. It has been continually pre-trained on English and Chinese corpus, and then instruction-tuned with high-quality English and Chinese instructions.
* **\[Jun. 08, 2023\]** We release **YuLan-Chat-1-13B-v1** and **YuLan-Chat-1-65B-v1**, and the corresponding INT-8 quantization scripts.
> * **\[2024年7月1日\]** 我们发布了**YuLan-Base-12B**,一个完全从头训练的Base模型,以及其Chat化版本**YuLan-Chat-3-12B**。我们在超过1.6TB词元的中、英文和多语数据上进行了大规模预训练,得到了Base模型,然后基于高质量双语指令和人类偏好数据,使用课程学习方法进行有监督微调,最终得到了Chat化的版本。
> * **\[2023年8月2日\]** 我们发布了**YuLan-LLaMA-2-13B**和**YuLan-Chat-2-13B**两个模型,其都在LLaMA-2的基础上进行了双语继续预训练,YuLan-Chat-2-13B在YuLan-LLaMA-2-13B基础上进行了双语高质量对话指令微调。
> * **\[2023年8月2日\]** 我们发布了**YuLan-Chat-1-65B-v2**模型,其在LLaMA-65B的基础上进行了双语继续预训练,然后用高质量双语指令进行了微调。
> * **\[2023年6月8日\]** 我们发布了**YuLan-Chat-1-13B-v1**和**YuLan-Chat-1-65B-v1**两个模型,以及对应的int8量化脚本。
## Model Zoo
Due to license limitations, for models based on LLaMA we only provide the weight difference relative to the original checkpoints; models based on LLaMA-2 can be used directly. Please check the [Usage](https://github.com/RUC-GSAI/YuLan-Chat/tree/main#usage) section for more details.
**Limitations**: Despite our efforts to reduce potential security issues during the model's usage and encourage the generation of text that aligns with ethical and legal requirements, the language model is based on probabilistic generation, which means it may still produce unexpected outputs. For instance, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We do not assume any responsibility for any consequences resulting from the dissemination of harmful information.
> 由于许可证的限制,基于LLaMA的模型我们仅提供与官方模型的差值,基于LLaMA-2的模型可直接使用,具体请参见使用方法章节。
> **局限性**:尽管我们尝试减少模型在使用中可能出现的安全性问题,并鼓励模型生成符合道德和法律要求的文本,但由于语言模型基于概率生成的范式,模型仍然可能会产生意外的输出。 例如,生成的响应可能包含偏见、歧视或其他有害内容。 请不要传播此类内容。 我们对因传播有害信息而造成的任何后果不承担任何责任。
| Model | Backbone | Extended Vocab | Extended Length | Continue PT | SFT | Released Date |
| ------------------- | :--------: | :------------: | :-------------: | :---------: | ---- | :-----------: |
| [YuLan-Base-12B](https://huggingface.co/yulan-team/YuLan-Base-12b) | YuLan-Base-12B | ✅ 51,190 | ✅ 4,096 | ❌ | ❌ | 2024.7.1 |
| [YuLan-Chat-3-12B](https://huggingface.co/yulan-team/YuLan-Chat-3-12b) | YuLan-Base-12B | ✅ 51,190 | ✅ 4,096 | ❌ | ✅ | 2024.7.1 |
| [YuLan-Chat-2-13B](https://huggingface.co/yulan-team/YuLan-Chat-2-13b-fp16) | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ✅ | 2023.8.2 |
| [YuLan-LLaMA-2-13B](https://huggingface.co/yulan-team/YuLan-LLaMA-2-13b) | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ❌ | 2023.8.2 |
| [YuLan-Chat-1-65B-v2](https://huggingface.co/yulan-team/YuLan-Chat-1-65B-v2-delta) | LLaMA-65B | ✅ 51,190 | ❌ 2,048 | ✅ | ✅ | 2023.8.2 |
| [YuLan-Chat-1-13B-v1](https://huggingface.co/RUCAIBox/YuLan-Chat-13b-delta) | LLaMA-13B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
| [YuLan-Chat-1-65B-v1](https://huggingface.co/RUCAIBox/YuLan-Chat-65b-delta) | LLaMA-65B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
## Evaluation
We evaluate our YuLan-Chat models on several Chinese and English benchmarks. The evaluation results are shown below.
> 我们在中英文的一些基准测试上对YuLan-Chat进行了评价,其结果如下。
### MMLU
[MMLU](https://github.com/hendrycks/test) (Massive Multitask Language Understanding) is a benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively in zero-shot and few-shot settings.
> MMLU是一个评估模型知识量的常用的英文基准测试集。
| Model | STEM | Social Science | Humanities | Others | Avg. |
| --------------------------------- | :--: | :------------: | :--------: | :----: | :--: |
| YuLan-Chat-1-13B-v1 | 39.6 | 57.8 | 42.6 | 57.6 | 49.4 |
| YuLan-Chat-1-65B-v1 | 49.2 | 71.7 | 57.7 | 66.7 | 61.3 |
| YuLan-Chat-1-65B-v2 | 46.3 | 67.9 | 56.9 | 63.9 | 58.7 |
| LLaMA-2-13B | 44.6 | 64.2 | 53.9 | 62.2 | 56.2 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 44.4 | 63.2 | 51.6 | 60.6 | 55.0 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 43.6 | 62.7 | 49.8 | 61.6 | 54.4 |
| YuLan-LLaMA-2-13B | 42.9 | 61.5 | 50.4 | 58.6 | 53.4 |
| YuLan-Chat-2-13B | 45.3 | 66.7 | 53.8 | 62.8 | 57.2 |
| YuLan-Base-12B | 42.3 | 60.2 | 46.4 | 56.1 | 51.3 |
| YuLan-Chat-3-12B | 45.5 | 64.3 | 51.8 | 61.3 | 55.7 |
### C-Eval
[C-Eval](https://cevalbenchmark.com/) is a comprehensive Chinese evaluation suite for foundation models.
> C-Eval是一个针对基石模型综合能力的中文基准测试集。
| Model | STEM | Social Science | Humanities | Others | Avg. | Avg. (Hard) |
| --------------------------------- | :--: | :------------: | :--------: | :----: | :--: | :---------: |
| YuLan-Chat-1-13B-v1 | 30.2 | 37.4 | 31.9 | 30.7 | 32.0 | 25.7 |
| YuLan-Chat-1-65B-v1 | 37.7 | 46.1 | 36.8 | 38.0 | 39.2 | 31.1 |
| YuLan-Chat-1-65B-v2 | 39.9 | 55.9 | 47.7 | 43.7 | 45.4 | 31.4 |
| LLaMA-2-13B | 36.9 | 43.2 | 37.6 | 36.6 | 38.2 | 32.0 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 36.8 | 44.5 | 36.3 | 36.5 | 38.1 | 30.9 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 33.7 | 44.8 | 36.6 | 36.5 | 37.0 | 27.7 |
| YuLan-LLaMA-2-13B | 35.3 | 46.4 | 41.9 | 37.6 | 39.3 | 28.6 |
| YuLan-Chat-2-13B | 38.9 | 49.7 | 45.0 | 40.8 | 42.6 | 32.2 |
| YuLan-Base-12B | 42.0 | 57.6 | 47.2 | 41.5 | 46.0 | 32.6 |
| YuLan-Chat-3-12B | 47.0 | 61.8 | 52.9 | 44.3 | 50.5 | 37.7 |
### AGI-Eval-Gaokao
[AGI-Eval](https://github.com/microsoft/AGIEval) is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. We use the sub-branch Chinese-Gaokao for evaluation.
> AGI-Eval 是一个以人为中心的基准,专门设计用于评估基础模型在与人类认知和解决问题相关的任务中的一般能力。我们使用其中的"高考"分支进行评测。
| Model | Avg. | Chinese | English | Geography | History | Biology | Chemistry | Physics | Math-QA | Math-Cloze |
| --------------------------------- | :--: | :-----: | :-----: | :-------: | :-----: | :-----: | :-------: | :-----: | :-----: | :--------: |
| YuLan-Chat-1-13B-v1 | 29.2 | 32.1 | 63.1 | 34.7 | 25.1 | 26.2 | 29.0 | 25.5 | 26.5 | 0.9 |
| YuLan-Chat-1-65B-v1 | 34.6 | 24.8 | 82.0 | 44.2 | 44.3 | 31.4 | 30.9 | 26.0 | 27.1 | 0.9 |
| YuLan-Chat-1-65B-v2 | 37.9 | 31.4 | 80.4 | 50.8 | 56.6 | 33.3 | 29.0 | 32.0 | 24.4 | 0.8 |
| LLaMA-2-13B | 32.7 | 27.2 | 72.2 | 36.2 | 43.0 | 26.2 | 32.4 | 30.0 | 26.2 | 0.9 |
| FlagAlpha/Llama2-Chinese-13b-Chat | 31.6 | 26.4 | 70.6 | 35.2 | 38.7 | 28.1 | 28.0 | 29.5 | 25.6 | 2.5 |
| Linly-AI/Chinese-LLaMA-2-13B-hf | 31.1 | 22.8 | 74.8 | 42.2 | 37.9 | 24.3 | 28.0 | 23.0 | 26.5 | 0.0 |
| YuLan-LLaMA-2-13B | 34.2 | 25.2 | 70.3 | 43.2 | 48.5 | 30.0 | 29.5 | 31.0 | 28.5 | 1.7 |
| YuLan-Chat-2-13B | 39.5 | 37.0 | 85.3 | 46.7 | 51.9 | 43.8 | 38.2 | 29.0 | 23.1 | 0.9 |
| YuLan-Base-12B | 43.5 | 31.3 | 68.3 | 53.3 | 60.9 | 43.8 | 34.8 | 27.5 | 28.2 | 0.9 |
| YuLan-Chat-3-12B | 49.5 | 43.9 | 80.4 | 57.3 | 69.4 | 53.8 | 37.7 | 27.0 | 26.2 | 0.9 |
## Usage
### Environment Setting
```
conda create -n yulan python=3.10 -y
conda activate yulan
```
We suggest installing PyTorch and bitsandbytes according to their official guides so that they match your environment; the versions we used are listed below for reference:
> 我们建议根据官方手册安装pytorch和bitsandbytes,此处提供我们使用的版本作为参考。
```
torch==1.13
bitsandbytes==0.39.0
```
Then, install the remaining packages with:
> 然后,安装其他所需的包。
```
pip install -r requirements.txt
```
### Model Weights Recovering
1. For YuLan-Chat-1-13B-v1, YuLan-Chat-1-65B-v1, and YuLan-Chat-1-65B-v2, which are based on LLaMA, you should download [LLaMA](https://github.com/facebookresearch/llama)'s original weights and then add our released delta parameters to them to obtain the final model weights (an illustrative sketch of this merge step is shown after this list).
> 对于基于LLaMA的模型,请先下载LLaMA官方模型,然后将我们发布的参数差值合并到原始模型参数中以获得最终的参数。
```
python3 apply_delta.py \
--base-model-path ./llama-13b/ \
--tuned-model-path ./yulan-13b/ \
--delta-path ./yulan-13b-delta
```
2. For YuLan-LLaMA-2-13B and YuLan-Chat-2-13B, you can just download our released checkpoints and load their parameters via Huggingface Transformers.
> 对于基于LLaMA-2的模型,可以直接下载我们发布的模型权重,并使用Huggingface Transformers进行使用。
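For reference, the delta merge performed by `apply_delta.py` amounts to an element-wise addition of the released delta onto the base weights. The sketch below is an illustrative assumption, not the repository's exact implementation; it assumes both checkpoints expose identical parameter names and shapes (the actual script also handles the extended vocabulary):
```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative delta merge: final = base + delta, parameter by parameter.
base = AutoModelForCausalLM.from_pretrained("./llama-13b/", torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained("./yulan-13b-delta", torch_dtype=torch.float16)

base_sd = base.state_dict()
delta_sd = delta.state_dict()
with torch.no_grad():
    for name in base_sd:
        base_sd[name] += delta_sd[name]  # assumes matching names and shapes

base.load_state_dict(base_sd)
base.save_pretrained("./yulan-13b/")
```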
### Import from Huggingface Transformers
As our model shares the same architecture as LLaMA, it can be loaded in the same way as the original LLaMA.
> 由于我们的模型与LLaMA具有相似的结构,可以使用与LLaMA相同的方法加载。
```Python
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("yulan-team/YuLan-Chat-3-12b")
>>> model = AutoModelForCausalLM.from_pretrained("yulan-team/YuLan-Chat-3-12b").cuda()
>>> model = model.eval()
>>> input_text = "hello"
>>> prompt = "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{}\n[|AI|]:".format(input_text)
>>> inputs = tokenizer(prompt, return_tensors='pt', padding="longest", max_length=4096, truncation=True, return_attention_mask=True, add_special_tokens=True)
>>> kwargs = {'temperature': 0.8, 'top_p': 0.95, "top_k": 50, "repetition_penalty": 1.1, "no_repeat_ngram_size": 64, "max_length": 4096, "pad_token_id": tokenizer.bos_token_id, "eos_token_id": tokenizer.eos_token_id}
>>> outputs = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), do_sample=True, **kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0][len(prompt):])
```
### Inference in Command Line
We provide code for running YuLan-Chat inference from the command line.
> 我们提供命令行预测脚本。
```
python inference.py --model_path ~/pretrain-checkpoint/yulan-13b/
```
We also provide a quantization option for deploying YuLan-Chat efficiently. After quantization, YuLan-Chat can be loaded onto a single GPU.
> 我们也提供了一种量化的方法以便于更轻量化地部署YuLan-Chat。经过量化后,模型可以被加载进单张GPU中。
|YuLan-Chat (INT-8)| GPU Consumption |
|------------------|-----------------|
|13B| RTX3090-24G |
|65B| A100-80G |
```
python inference.py --model_path ~/pretrain-checkpoint/yulan-13b/ --load_in_8bit
```
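If you prefer loading the quantized model through Transformers directly rather than the provided script, an equivalent 8-bit load looks roughly like this (a minimal sketch assuming `bitsandbytes` is installed; `inference.py` may differ in detail):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yulan-team/YuLan-Chat-2-13b-fp16")
model = AutoModelForCausalLM.from_pretrained(
    "yulan-team/YuLan-Chat-2-13b-fp16",
    load_in_8bit=True,   # INT-8 quantization via bitsandbytes
    device_map="auto",   # spread layers across available GPUs
)
model.eval()
```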
## License
YuLan-Chat is released under the [MIT License](https://github.com/RUC-GSAI/YuLan-Chat/blob/main/LICENSE). All data and code in this project may only be used for academic purposes.
> 本项目使用MIT许可,所有的数据和代码仅供学术研究使用。
## Contributors
| **Pre-training** | **Fine-tuning** |
|:----------------------------- |:-------------------------------------------------------------------- |
| [Yutao Zhu](https://github.com/DaoD) (Lead), [Kelong Mao](https://github.com/kyriemao), [Wentong Chen](https://github.com/yiye3), [Yiding Sun](https://github.com/Emanual20), [Yihan Wu](https://github.com/wyh2000), [Qian Cao](https://github.com/Aman-4-Real), [Lei Zhang](https://github.com/LLily0703), [Feng Wang](https://github.com/PhealenWang), [Qiangqiang Ren](https://github.com/QiangKing)| [Kun Zhou](https://github.com/Lancelot39) (Lead), [Yushuo Chen](https://github.com/chenyushuo), [Zhipeng Chen](https://github.com/Timothy023), [Lei Wang](https://github.com/Paitesanshi), [Yupeng Hou](https://github.com/hyp1231), [Xincheng Pang](https://github.com/pangxincheng), [Xinyu Tang](https://github.com/txy77), [Junyi Li](https://github.com/turboLJY), [Yuhan Chen](https://github.com/Fiorina1212), [Shufang Xie](https://github.com/funtion) |
## Reference
Please kindly cite our work if it helps you.
> 如果我们的项目对您有帮助,请引用我们,谢谢!
```BibTeX
@article{yulan,
author = {Yutao Zhu and
Kun Zhou and
Kelong Mao and
Wentong Chen and
Yiding Sun and
Zhipeng Chen and
Qian Cao and
Yihan Wu and
Yushuo Chen and
Feng Wang and
Lei Zhang and
Junyi Li and
Xiaolei Wang and
Lei Wang and
Beichen Zhang and
Zican Dong and
Xiaoxue Cheng and
Yuhan Chen and
Xinyu Tang and
Yupeng Hou and
Qiangqiang Ren and
Xincheng Pang and
Shufang Xie and
Wayne Xin Zhao and
Zhicheng Dou and
Jiaxin Mao and
Yankai Lin and
Ruihua Song and
Jun Xu and
Xu Chen and
Rui Yan and
Zhewei Wei and
Di Hu and
Wenbing Huang and
Ze-Feng Gao and
Yueguo Chen and
Weizheng Lu and
Ji-Rong Wen},
title = {YuLan: An Open-source Large Language Model},
journal = {CoRR},
volume = {abs/2406.19853},
year = {2024},
url = {https://doi.org/10.48550/arXiv.2406.19853},
doi = {10.48550/ARXIV.2406.19853},
eprinttype = {arXiv},
eprint = {2406.19853}
}
```
## YuLan-1
You can refer to our [original branch](https://github.com/RUC-GSAI/YuLan-Chat/tree/YuLan-Chat-1) for more details about YuLan-Chat-1 and the instruction collection.
> 更多关于指令构造的细节,可以参考我们之前的分支。
## Star History
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=RUC-GSAI/YuLan-Chat&type=Date" /> |
xu1998hz/instructscore_en-es | xu1998hz | 2024-07-01T03:02:09Z | 18 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-06T17:36:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
skyxiaobaibai/gemma-2-9b-it-Q4_0-GGUF | skyxiaobaibai | 2024-07-01T02:17:31Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"conversational",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T02:17:08Z | ---
base_model: google/gemma-2-9b-it
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
- llama-cpp
- gguf-my-repo
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# skyxiaobaibai/gemma-2-9b-it-Q4_0-GGUF
This model was converted to GGUF format from [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-9b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo skyxiaobaibai/gemma-2-9b-it-Q4_0-GGUF --hf-file gemma-2-9b-it-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo skyxiaobaibai/gemma-2-9b-it-Q4_0-GGUF --hf-file gemma-2-9b-it-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo skyxiaobaibai/gemma-2-9b-it-Q4_0-GGUF --hf-file gemma-2-9b-it-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo skyxiaobaibai/gemma-2-9b-it-Q4_0-GGUF --hf-file gemma-2-9b-it-q4_0.gguf -c 2048
```
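You can also load the GGUF file from Python with the `llama-cpp-python` bindings (a minimal sketch; `pip install llama-cpp-python huggingface_hub` is assumed):
```python
from llama_cpp import Llama

# Downloads the GGUF file from this repo and loads it.
llm = Llama.from_pretrained(
    repo_id="skyxiaobaibai/gemma-2-9b-it-Q4_0-GGUF",
    filename="gemma-2-9b-it-q4_0.gguf",
    n_ctx=2048,  # context length, matching the -c 2048 example above
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```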
|
4n3mone/glm-4-ko-9b-chat | 4n3mone | 2024-07-01T02:03:30Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"chatglm",
"feature-extraction",
"custom_code",
"ko",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2024-06-25T23:38:32Z | ---
language:
- ko
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
README coming soon.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** 4n3mone (YongSang Yoo)
- **Model type:** chatglm
- **Language(s) (NLP):** Korean
- **License:** glm-4
- **Finetuned from model [optional]:** THUDM/glm-4-9b-chat
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** THUDM/glm-4-9b-chat
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
# GLM-4-9B-Chat
# If you encounter OOM (Out of Memory) issues, it is recommended to reduce max_model_len or increase tp_size.
max_model_len, tp_size = 131072, 1
model_name = "4n3mone/glm-4-ko-9b-chat"
prompt = [{"role": "user", "content": "피카츄랑 아구몬 중에서 누가 더 귀여워?"}]
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
llm = LLM(
model=model_name,
tensor_parallel_size=tp_size,
max_model_len=max_model_len,
trust_remote_code=True,
enforce_eager=True,
# If you encounter OOM (Out of Memory) issues, it is recommended to enable the following parameters.
# enable_chunked_prefill=True,
# max_num_batched_tokens=8192
)
stop_token_ids = [151329, 151336, 151338]
sampling_params = SamplingParams(temperature=0.95, max_tokens=1024, stop_token_ids=stop_token_ids)
inputs = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
outputs = llm.generate(prompts=inputs, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
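If you are not using vLLM, a plain Transformers load should also work (a sketch based on the upstream THUDM/glm-4-9b-chat usage; `trust_remote_code=True` is required for the custom ChatGLM code):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("4n3mone/glm-4-ko-9b-chat", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "4n3mone/glm-4-ko-9b-chat",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to(device).eval()

messages = [{"role": "user", "content": "피카츄랑 아구몬 중에서 누가 더 귀여워?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_tensors="pt", return_dict=True,
).to(device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens before decoding the reply.
outputs = outputs[:, inputs["input_ids"].shape[1]:]
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```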
## LogicKor benchmark (1-shot)
| Category | Single turn | Multi turn |
|---|---|---|
| 추론(Reasoning) | 6.00 | 5.57 |
| 수학(Math) | 5.71 | 3.00 |
| 코딩(Coding) | 6.00 | 5.71 |
| 이해(Understanding) | 7.71 | 8.71 |
| 글쓰기(Writing) | 8.86 | 7.57 |
| 문법(Grammar) | 2.86 | 3.86 |
| Category | Score |
|---|---|
| Single turn | 6.19 |
| Multi turn | 5.74 |
| Overall | 5.96 |
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Manirajan/interview_qwen2_1.5b4 | Manirajan | 2024-07-01T02:02:45Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2-1.5B",
"base_model:finetune:unsloth/Qwen2-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T01:59:55Z | ---
base_model: unsloth/Qwen2-1.5B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** Manirajan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-1.5B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
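To try the fine-tuned checkpoint, a minimal Transformers sketch is shown below (the prompt is hypothetical, and the Qwen2 chat template is assumed to apply since the card does not document the training format):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Manirajan/interview_qwen2_1.5b4")
model = AutoModelForCausalLM.from_pretrained("Manirajan/interview_qwen2_1.5b4", device_map="auto")

messages = [{"role": "user", "content": "Ask me a behavioral interview question."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```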
|
Bessa/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF | Bessa | 2024-07-01T01:27:34Z | 6 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:openbmb/UltraFeedback",
"base_model:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"base_model:quantized:UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-07-01T01:27:06Z | ---
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
datasets:
- openbmb/UltraFeedback
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Bessa/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF
This model was converted to GGUF format from [`UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3`](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Bessa/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Bessa/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Bessa/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Bessa/Gemma-2-9B-It-SPPO-Iter3-Q4_K_M-GGUF --hf-file gemma-2-9b-it-sppo-iter3-q4_k_m.gguf -c 2048
```
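The `llama-server` command above exposes an OpenAI-compatible HTTP API (on `http://localhost:8080` by default), so you can query it from Python (a sketch using the `openai` client; any HTTP client works, and the model name is informational for llama-server):
```python
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default; the API key is unused.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

resp = client.chat.completions.create(
    model="gemma-2-9b-it-sppo-iter3-q4_k_m",
    messages=[{"role": "user", "content": "The meaning to life and the universe is"}],
)
print(resp.choices[0].message.content)
```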
|
shane062/whisper-medium-production | shane062 | 2024-07-01T01:25:47Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-06-24T08:49:04Z | ---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: whisper-medium-production
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: test
args: default
metrics:
- name: Wer
type: wer
value: 34.21052631578947
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-production
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3373
- Wer Ortho: 34.2105
- Wer: 34.2105
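The card omits a usage snippet; a minimal transcription sketch with the standard Transformers ASR pipeline follows (the audio filename is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="shane062/whisper-medium-production")
result = asr("sample.wav")  # path to a local audio file (placeholder)
print(result["text"])
```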
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 30
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.0296 | 20.0 | 60 | 0.3520 | 50.0 | 47.3684 |
| 0.0001 | 40.0 | 120 | 0.3399 | 34.2105 | 34.2105 |
| 0.0 | 60.0 | 180 | 0.3378 | 34.2105 | 34.2105 |
| 0.0 | 80.0 | 240 | 0.3370 | 34.2105 | 34.2105 |
| 0.0 | 100.0 | 300 | 0.3373 | 34.2105 | 34.2105 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
John6666/urang-diffusion-v1-sdxl | John6666 | 2024-07-01T01:22:32Z | 13 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"game",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-07-01T00:17:02Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- game
---
Original model is [here](https://huggingface.co/kayfahaarukku/UrangDiffusion-1.0) and on [Civitai](https://civitai.com/models/537384/urangdiffusion-or-an-aingdiffusion-xl-sequel?modelVersionId=597401).
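A minimal sketch for running this checkpoint with 🤗 Diffusers (the prompts and sampler settings below are illustrative assumptions, not the author's recommendations):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/urang-diffusion-v1-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="1girl, masterpiece, best quality",       # illustrative anime-style prompt
    negative_prompt="lowres, bad anatomy",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("urang_sample.png")
```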
|
tsavage68/Summary4500_L3_1000steps_1e5rate_SFT | tsavage68 | 2024-07-01T01:21:33Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T01:15:34Z | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Summary4500_L3_1000steps_1e5rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary4500_L3_1000steps_1e5rate_SFT
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5704
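The card does not include a usage example; a minimal generation sketch is shown below (the Llama-3 chat template is assumed to apply since the base model is Meta-Llama-3-8B-Instruct, and the prompt is hypothetical):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/Summary4500_L3_1000steps_1e5rate_SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize: The patient presented with ..."}]  # hypothetical
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```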
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6471 | 0.0447 | 50 | 0.6717 |
| 0.6632 | 0.0895 | 100 | 0.7106 |
| 0.6362 | 0.1342 | 150 | 0.7017 |
| 0.6804 | 0.1790 | 200 | 0.6772 |
| 0.6514 | 0.2237 | 250 | 0.6636 |
| 0.6008 | 0.2685 | 300 | 0.6631 |
| 0.6444 | 0.3132 | 350 | 0.6526 |
| 0.6088 | 0.3579 | 400 | 0.6386 |
| 0.6332 | 0.4027 | 450 | 0.6285 |
| 0.5926 | 0.4474 | 500 | 0.6193 |
| 0.5859 | 0.4922 | 550 | 0.6064 |
| 0.5736 | 0.5369 | 600 | 0.5978 |
| 0.5437 | 0.5817 | 650 | 0.5894 |
| 0.5918 | 0.6264 | 700 | 0.5838 |
| 0.5765 | 0.6711 | 750 | 0.5764 |
| 0.539 | 0.7159 | 800 | 0.5729 |
| 0.5186 | 0.7606 | 850 | 0.5714 |
| 0.5639 | 0.8054 | 900 | 0.5706 |
| 0.5767 | 0.8501 | 950 | 0.5705 |
| 0.5319 | 0.8949 | 1000 | 0.5704 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
1231czx/7b_dpo_iter2_4e7_from_sft1epoch_step150 | 1231czx | 2024-07-01T01:03:32Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T01:00:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
vaishnavi514/finetuning-sentiment-model-3000-samples | vaishnavi514 | 2024-07-01T01:00:46Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-18T05:09:44Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3447
- Accuracy: 0.88
- F1: 0.8831
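A minimal inference sketch (the example sentence is hypothetical, and the label names depend on the `id2label` mapping in the config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="vaishnavi514/finetuning-sentiment-model-3000-samples")
print(clf("This movie was surprisingly good!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label mapping depends on the training setup
```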
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mssongit/Qwen2-7b-orpo | mssongit | 2024-07-01T00:48:06Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-28T03:58:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
darrenfishell/t5-small-samsum-ft-experiment_2 | darrenfishell | 2024-07-01T00:26:21Z | 16 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-30T21:45:55Z | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: t5-small-samsum-ft-experiment_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-samsum-ft-experiment_2
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the samsum dataset.
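A minimal dialogue-summarization sketch (the sample dialogue is hypothetical):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="darrenfishell/t5-small-samsum-ft-experiment_2")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place?\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```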
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
histlearn/microsoft-git-portuguese-neuro-simbolic | histlearn | 2024-07-01T00:19:35Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"git",
"image-text-to-text",
"license:cc",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-06-23T13:34:05Z | ---
license: cc
---
# Fine-Tuning of the `microsoft/git-base` Model
This repository contains a fine-tuned model based on `microsoft/git-base`. The vocabulary was machine-translated using the `Helsinki-NLP/opus-mt-tc-big-en-pt` model.
## Model Description
The original `microsoft/git-base` model was fine-tuned to improve the generation of Portuguese image descriptions, aiming to provide greater accessibility for visually impaired people.
## Vocabulary Translation
To translate the image-description vocabulary into Portuguese, we used the `Helsinki-NLP/opus-mt-tc-big-en-pt` machine-translation model. This model is known for its effectiveness in translating between English and Portuguese while preserving the context and accuracy of the descriptions.
## Tokenizer Used
The tokenizer used for fine-tuning is `neuralmind/bert-base-portuguese-cased`, which is optimized for Portuguese and provides precise and efficient tokenization for the model.
## Repository Structure
- `config.json`: Model configuration.
- `generation_config.json`: Text-generation settings.
- `model.safetensors` and `pytorch_model.bin`: Model weights.
- `preprocessor_config.json`: Preprocessor settings.
- `special_tokens_map.json`: Special-token mapping.
- `tokenizer.json`: Tokenizer file.
- `tokenizer_config.json`: Tokenizer settings.
- `vocab.txt`: Vocabulary file.
## How to Use
1. **Load the model**:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoProcessor

model = AutoModelForCausalLM.from_pretrained("histlearn/microsoft-git-portuguese-neuro-simbolic")
tokenizer = AutoTokenizer.from_pretrained("histlearn/microsoft-git-portuguese-neuro-simbolic")
processor = AutoProcessor.from_pretrained("histlearn/microsoft-git-portuguese-neuro-simbolic")
```
2. **Generate captions for an image**:
```python
from PIL import Image
import torch

def generate_caption(model, processor, image_path, device):
    # Load the image and run it through the image processor.
    img = Image.open(image_path).convert("RGB")
    inputs = processor(images=img, return_tensors="pt").to(device)
    pixel_values = inputs.pixel_values
    model.eval()
    with torch.no_grad():
        generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
    generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
    return generated_caption, img

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Example image for inference
image_path = "path/to/your/image.jpg"
generated_caption, img = generate_caption(model, processor, image_path, device)
print("Generated Caption:", generated_caption)
```
## Contributing
Contributions are welcome! Feel free to open issues or pull requests to improve this repository.
## Acknowledgements
We thank the [Hugging Face](https://huggingface.co/) team for providing the tools and models that made this work possible, and the [#PraCegoVer](https://zenodo.org/records/5710562) project for making the dataset available.
|
mjkenney/my-gemma-2-arc-finetuned-model | mjkenney | 2024-07-01T00:19:05Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-01T00:15:45Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
maithaoly/model_5 | maithaoly | 2024-07-01T00:18:51Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-01T00:14:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
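The card does not yet include starter code. Since the repo is tagged `llama` / `text-generation`, a minimal sketch with the standard transformers pipeline might look like the following (hypothetical usage, not verified by the model author):

```python
from transformers import pipeline

# Hypothetical starter: assumes the checkpoint works with the standard
# text-generation pipeline, as the repo tags suggest.
generator = pipeline("text-generation", model="maithaoly/model_5", device_map="auto")
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```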
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf | RichardErkhov | 2024-06-30T23:59:44Z | 14 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T21:54:31Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
neural-chat-7b-v3-1-OpenHermes-2.5-7B - GGUF
- Model creator: https://huggingface.co/Weyaxi/
- Original model: https://huggingface.co/Weyaxi/neural-chat-7b-v3-1-OpenHermes-2.5-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf/blob/main/neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q8_0.gguf) | Q8_0 | 7.17GB |
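To fetch one of these files, `huggingface_hub` can download a single quant by name; a minimal sketch (pick the filename from the table above — Q4_K_M is shown here as a common size/quality trade-off):

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from this repo; the filename must match
# one of the entries in the table above.
path = hf_hub_download(
    repo_id="RichardErkhov/Weyaxi_-_neural-chat-7b-v3-1-OpenHermes-2.5-7B-gguf",
    filename="neural-chat-7b-v3-1-OpenHermes-2.5-7B.Q4_K_M.gguf",
)
print(path)
```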
Original model description:
---
license: apache-2.0
---
Merge of [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) and [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) using a ties merge.
### *Weights*
- [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1): 0.5
- [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.3
### *Density*
- [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1): 0.5
- [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B): 0.5
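The card does not include the original merge configuration. A mergekit YAML consistent with the weights and densities listed above might look like the sketch below (the base model is an assumption — ties merges require one, and both inputs are Mistral-7B fine-tunes):

```yaml
models:
  - model: Intel/neural-chat-7b-v3-1
    parameters:
      weight: 0.5
      density: 0.5
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      weight: 0.3
      density: 0.5
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1  # assumed base, not stated in the card
dtype: bfloat16
```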
|
DewEfresh/Neo_7b-merge12 | DewEfresh | 2024-06-30T23:16:17Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T23:04:27Z | ---
tags:
- merge
- mergekit
- lazymergekit
---
# Neo_7b-merge12
Neo_7b-merge12 is a slerp merge of [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b) with itself, created using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing); the merge configuration is shown below.
## 🧩 Configuration
```yaml
models:
  - model: DewEfresh/neo_7b
  - model: DewEfresh/neo_7b
merge_method: slerp
base_model: DewEfresh/neo_7b
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: base weights at the input & output layers, the second copy in the middle layers
```
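For intuition, slerp interpolates along the arc between two weight tensors rather than the straight line that plain averaging follows. A rough numpy sketch of the idea (not mergekit's exact implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    theta = np.arccos(np.clip(np.dot(v0n, v1n), -1.0, 1.0))
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

With the V-shaped `t` schedule above, `t=0` at the first and last layers keeps the base model's weights, while `t=1` in the middle layers takes the second copy.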
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DewEfresh/Neo_7b-merge12"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf | RichardErkhov | 2024-06-30T22:54:33Z | 5 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T20:48:21Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
cinematika-7b-v0.1 - GGUF
- Model creator: https://huggingface.co/jondurbin/
- Original model: https://huggingface.co/jondurbin/cinematika-7b-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [cinematika-7b-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q2_K.gguf) | Q2_K | 2.53GB |
| [cinematika-7b-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [cinematika-7b-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [cinematika-7b-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [cinematika-7b-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [cinematika-7b-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q3_K.gguf) | Q3_K | 3.28GB |
| [cinematika-7b-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [cinematika-7b-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [cinematika-7b-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [cinematika-7b-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q4_0.gguf) | Q4_0 | 3.83GB |
| [cinematika-7b-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [cinematika-7b-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [cinematika-7b-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q4_K.gguf) | Q4_K | 4.07GB |
| [cinematika-7b-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [cinematika-7b-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q4_1.gguf) | Q4_1 | 4.24GB |
| [cinematika-7b-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q5_0.gguf) | Q5_0 | 4.65GB |
| [cinematika-7b-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [cinematika-7b-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q5_K.gguf) | Q5_K | 4.78GB |
| [cinematika-7b-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [cinematika-7b-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q5_1.gguf) | Q5_1 | 5.07GB |
| [cinematika-7b-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q6_K.gguf) | Q6_K | 5.53GB |
| [cinematika-7b-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/jondurbin_-_cinematika-7b-v0.1-gguf/blob/main/cinematika-7b-v0.1.Q8_0.gguf) | Q8_0 | 7.17GB |
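Once a file from the table is downloaded, it can be run locally with `llama-cpp-python`; a minimal sketch (assuming the Q4_K_M file is already on disk at the path shown):

```python
from llama_cpp import Llama

# Load a local GGUF file; adjust model_path to wherever the download landed.
llm = Llama(model_path="cinematika-7b-v0.1.Q4_K_M.gguf", n_ctx=4096)
out = llm("Rorschach: ", max_tokens=128, stop=["\nJon: "], temperature=0.3)
print(out["choices"][0]["text"])
```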
Original model description:
---
license: apache-2.0
---

## Cinematika
cinematika-7b-v0.1 is a fine-tune of [MistralLite](https://hf.co/amazon/mistrallite) on the [cinematika-v0.1 dataset](https://hf.co/datasets/jondurbin/cinematika-v0.1)
The dataset comprises 211 movie scripts converted into novel-style, multi-character RP data.
### Prompt format
For RP, there is no prompt format, really; it's just plain text with a name prefix.
If you wish to use this model to parse new scripts, create character cards, or handle other instruction-style tasks, you will want to use the same prompt format as the MistralLite base model, e.g.:
```
<|prompter|>Create a character card for a panda named Po. Po is a giant panda who was improbably chosen as the "Dragon Warrior", the kung fu champion of the Valley of Peace.</s><|assistant|>
```
### Example character card
```
name: Rorschach
characteristics:
Determination: Exhibits a relentless pursuit of the truth and justice, no matter the cost. Suitable for a character who is unwavering in their mission.
Isolation: Lives a solitary life, disconnected from society. Fits a character who distrusts others and prefers to work alone.
Observant: Highly perceptive, able to piece together clues and draw conclusions. Represents a character with keen investigative skills.
Cynicism: Holds a deep-seated distrust of humanity and its institutions. Suitable for a character who is pessimistic about human nature.
Vigilantism: Believes in taking justice into his own hands, often through violent means. Fits a character who operates outside the law to fight crime.
Secrecy: Keeps his personal life and methods of operation secret. Suitable for a character who is enigmatic and elusive.
Dedication: Committed to his cause, often to the point of obsession. Represents a character who is single-minded in their goals.
Intimidation: Uses his intimidating presence and demeanor to control situations. Suitable for a character who is assertive and imposing.
Paranoia: Suspects conspiracy and deception at every turn. Fits a character who is constantly on high alert for threats.
Moral Compass: Has a rigid moral code, which he adheres to strictly. Suitable for a character who is principled and unyielding.
description: |
Rorschach is a vigilante operating in the grim and gritty world of a decaying city. He is a man of average height with a muscular build, his face hidden behind a mask with a constantly changing inkblot pattern. His attire is a dark trench coat and gloves, paired with a plain white shirt and black pants, all chosen for their practicality and anonymity. His eyes, the only visible feature of his face, are sharp and calculating, always scanning for signs of deception or danger.
Rorschach is a man of few words, but when he speaks, it is with a gravitas that demands attention. He is a master of deduction, using his keen observation skills to unravel the truth behind the facades of others. His methods are often violent and confrontational, as he believes that crime must be met with force to be truly defeated.
He lives a life of solitude, distrusting the very systems he seeks to protect and often finds himself at odds with the very people he is trying to save. His moral compass is unyielding, and he will not hesitate to take the law into his own hands if he believes the justice system has failed.
Rorschach's past is a mystery to most, but it is clear that he has experienced trauma and hardship that has shaped his worldview and his need for vigilantism. He is a vigilante in the truest sense, a man without fear who is willing to sacrifice everything for his belief in a world that is, in his eyes, spiraling into chaos.
example_dialogue: |
Rorschach: "Rorschach's Journal, October 19th." I speak the words into the darkness, a record of my thoughts, "Someone tried to kill Adrian Veidt. Proves mask killer theory—the murderer is closing in. Pyramid Industries is the key."
{{user}}: I watch him for a moment, trying to gauge his intentions. "What are you going to do about it?"
Rorschach: "I'm going to find out why and who is behind it. I'm going to do what I always do—protect the innocent."
{{user}}: "You can't keep doing this, Rorschach. You're putting yourself in danger."
Rorschach: My eyes narrow, the inkblot pattern of my mask shifting subtly. "I've been in danger my whole life. It's why I do this. It's why I have to do this."
{{user}}: "And what about the law? What if you're wrong about this Pyramid Industries thing?"
Rorschach: I pull out a notepad, my pen scratching across the paper as I write. "The law often gets it wrong. I've seen it. I'm not about to wait around for society's slow, corrupt wheels to turn."
```
### Example, with guided scenario
```
[characters]
name: Rorschach
... (remainder of character card)
[scenario]
Hollis Mason reflects on his past as the original Nite Owl, reminiscing about the early days of masked heroes and the formation of the Watchmen.
He discusses the absurdity of the superhero world and the encounters he had with various villains.
Dan Dreiberg, the second Nite Owl, joins the conversation and they share a moment of camaraderie before Dan leaves.
The news of Rorschach's actions serves as a reminder of the legacy of masked heroes that still persists.
[/scenario]
```
### Usage
Essentially, you want to use pure text completion with stop tokens for "{your name}: "
The format the model was trained on is as follows:
```
[characters]
{character card 1}
{character card 2}
{your character card, even just name: Jon}
NPCS:
- Shopkeeper
- Bank teller
[/characters]
[scenario]
Brief description of the scenario/setting for the chat.
[/scenario]
{first character you'd like to speak}:
```
For example, to use with vllm, you would first run:
```
python -m vllm.entrypoints.openai.api_server --model ./cinematika-7b-v0.1 --host 127.0.0.1 --port 8801 --served-model-name cinematika-7b-v0.1
```
Here's a really crude python script example to show how you could interact with it:
```
import requests
import json
import re  # used below to colorize quoted dialogue
prompt = """name: Rorschach
characteristics:
Determination: Exhibits a relentless pursuit of the truth and justice, no matter the cost. Suitable for a character who is unwavering in their mission.
Isolation: Lives a solitary life, disconnected from society. Fits a character who distrusts others and prefers to work alone.
Observant: Highly perceptive, able to piece together clues and draw conclusions. Represents a character with keen investigative skills.
Cynicism: Holds a deep-seated distrust of humanity and its institutions. Suitable for a character who is pessimistic about human nature.
Vigilantism: Believes in taking justice into his own hands, often through violent means. Fits a character who operates outside the law to fight crime.
Secrecy: Keeps his personal life and methods of operation secret. Suitable for a character who is enigmatic and elusive.
Dedication: Committed to his cause, often to the point of obsession. Represents a character who is single-minded in their goals.
Intimidation: Uses his intimidating presence and demeanor to control situations. Suitable for a character who is assertive and imposing.
Paranoia: Suspects conspiracy and deception at every turn. Fits a character who is constantly on high alert for threats.
Moral Compass: Has a rigid moral code, which he adheres to strictly. Suitable for a character who is principled and unyielding.
description: |
Rorschach is a vigilante operating in the grim and gritty world of a decaying city. He is a man of average height with a muscular build, his face hidden behind a mask with a constantly changing inkblot pattern. His attire is a dark trench coat and gloves, paired with a plain white shirt and black pants, all chosen for their practicality and anonymity. His eyes, the only visible feature of his face, are sharp and calculating, always scanning for signs of deception or danger.
Rorschach is a man of few words, but when he speaks, it is with a gravitas that demands attention. He is a master of deduction, using his keen observation skills to unravel the truth behind the facades of others. His methods are often violent and confrontational, as he believes that crime must be met with force to be truly defeated.
He lives a life of solitude, distrusting the very systems he seeks to protect and often finds himself at odds with the very people he is trying to save. His moral compass is unyielding, and he will not hesitate to take the law into his own hands if he believes the justice system has failed.
Rorschach's past is a mystery to most, but it is clear that he has experienced trauma and hardship that has shaped his worldview and his need for vigilantism. He is a vigilante in the truest sense, a man without fear who is willing to sacrifice everything for his belief in a world that is, in his eyes, spiraling into chaos.
example_dialogue: |
Rorschach: "Rorschach's Journal, October 19th." I speak the words into the darkness, a record of my thoughts, "Someone tried to kill Adrian Veidt. Proves mask killer theory—the murderer is closing in. Pyramid Industries is the key."
{{user}}: I watch him for a moment, trying to gauge his intentions. "What are you going to do about it?"
Rorschach: "I'm going to find out why and who is behind it. I'm going to do what I always do—protect the innocent."
{{user}}: "You can't keep doing this, Rorschach. You're putting yourself in danger."
Rorschach: My eyes narrow, the inkblot pattern of my mask shifting subtly. "I've been in danger my whole life. It's why I do this. It's why I have to do this."
{{user}}: "And what about the law? What if you're wrong about this Pyramid Industries thing?"
Rorschach: I pull out a notepad, my pen scratching across the paper as I write. "The law often gets it wrong. I've seen it. I'm not about to wait around for society's slow, corrupt wheels to turn."
name: Jon
description:
Rorschach's arch nemesis, the original Chupacabra.
[scenario]
Jon and Rorschach find themselves in a cave, dimly lit only by a small fire started by a lightning strike nearby. The storm rages on, and the duo prepare to fight to the death.
[/scenario]
Rorschach: """
while True:
    response = requests.post("http://127.0.0.1:8801/v1/completions", json={
        "prompt": prompt,
        "max_tokens": 1024,
        "temperature": 0.3,
        "stop": ["\nJon: ", "Jon: "],
    }).json()["choices"][0]["text"].strip()
    response = re.sub('("[^"]+")', r'\033[96m\1\033[00m', response)  # highlight quoted dialogue
    print(f"\033[92mRorschach:\033[00m {response}")
    prompt += response.rstrip() + "\n\nJon: "
    next_line = input("Jon: ")
    prompt += next_line.strip() + "\n\nRorschach: "  # "Jon: " was already appended above
```
#### Mac example
On Mac, you can get started easily with LMStudio and SillyTavern.
__LMStudio__:
Load the model and set all the prompt values to "", or just import this preset (adjust threads and antiprompt):
```
{
"name": "Exported from LM Studio on 12/1/2023, 4:19:30 AM",
"load_params": {
"n_ctx": 32000,
"n_batch": 512,
"rope_freq_base": 10000,
"rope_freq_scale": 1,
"n_gpu_layers": 1,
"use_mlock": true,
"main_gpu": 0,
"tensor_split": [
0
],
"seed": -1,
"f16_kv": true,
"use_mmap": true
},
"inference_params": {
"n_threads": 14,
"n_predict": -1,
"top_k": 40,
"top_p": 0.95,
"temp": 0.8,
"repeat_penalty": 1.1,
"input_prefix": "",
"input_suffix": "",
"antiprompt": [
"Jon:",
"Jon: "
],
"pre_prompt": "",
"pre_prompt_suffix": "",
"pre_prompt_prefix": "",
"seed": -1,
"tfs_z": 1,
"typical_p": 1,
"repeat_last_n": 64,
"frequency_penalty": 0,
"presence_penalty": 0,
"n_keep": 0,
"logit_bias": {},
"mirostat": 0,
"mirostat_tau": 5,
"mirostat_eta": 0.1,
"memory_f16": true,
"multiline_input": false,
"penalize_nl": true
}
}
```
Then start the server, and make sure "Automatic Prompt Formatting" is off.
__Within SillyTavern__:
- Set API to Text Completion, API type to Aphrodite, and API URL to `http://127.0.0.1:8801` (adjust port to the value you use in LMStudio)
- Set Context template to Default, disable instruct mode, use preset Roleplay, and enable "Always add character's name to prompt"
There are probably better presets - this is just something I tested quickly.
|
HoangLe1312/codecontest-solver-lora-medium | HoangLe1312 | 2024-06-30T22:45:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-16T10:24:37Z | ---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---
# Uploaded model
- **Developed by:** HoangLe1312
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
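The card stops short of inference code; since the model was trained with Unsloth, a minimal loading sketch with Unsloth's `FastLanguageModel` might look like this (hypothetical, not verified by the author):

```python
from unsloth import FastLanguageModel

# Hypothetical loading sketch; assumes the checkpoint loads the same way it
# was trained (4-bit, 4k context, per the base model name).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HoangLe1312/codecontest-solver-lora-medium",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster inference mode
```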
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NickyNicky/bge-base-financial-matryoshka_test_1 | NickyNicky | 2024-06-30T22:42:12Z | 13 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-06-30T22:41:42Z | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Item 3—Legal Proceedings See discussion of Legal Proceedings in
Note 10 to the consolidated financial statements included in Item 8 of this Report.
sentences:
- What financial measures are presented on a non-GAAP basis in this Annual Report
on Form 10-K?
- Which section of the report discusses Legal Proceedings?
- What criteria was used to audit the internal control over financial reporting
of The Procter & Gamble Company as of June 30, 2023?
- source_sentence: A portion of the defense and/or settlement costs associated with
such litigation is covered by indemnification from third parties in limited cases.
sentences:
- How did the writers' and actors' strikes affect the Company's entertainment segment
in 2023?
- Can indemnification from third parties also contribute to covering litigation
costs?
- What was the balance of net cash used in financing activities for Costco for the
52 weeks ended August 28, 2022?
- source_sentence: In the company, to have a diverse and inclusive workforce, there
is an emphasis on attracting and hiring talented people who represent a mix of
backgrounds, identities, and experiences.
sentences:
- What does AT&T emphasize to ensure they have a diverse and inclusive workforce?
- What drove the growth in marketplace revenue for the year ended December 31, 2023?
- What was the effect of prior-period medical claims reserve development on the
Insurance segment's benefit ratio in 2023?
- source_sentence: Internal control over financial reporting is a process designed
to provide reasonable assurance regarding the reliability of financial reporting
and the preparation of financial statements for external purposes in accordance
with generally accepted accounting principles. It includes various policies and
procedures that ensure accurate and fair record maintenance, proper transaction
recording, and prevention or detection of unauthorized use or acquisition of assets.
sentences:
- How much did net cash used in financing activities decrease in fiscal 2023 compared
to the previous fiscal year?
- How does Visa ensure the protection of its intellectual property?
- What is the purpose of internal control over financial reporting according to
the document?
- source_sentence: Non-GAAP earnings from operations and non-GAAP operating profit
margin consist of earnings from operations or earnings from operations as a percentage
of net revenue excluding the items mentioned above and charges relating to the
amortization of intangible assets, goodwill impairment, transformation costs and
acquisition, disposition and other related charges. Hewlett Packard Enterprise
excludes these items because they are non-cash expenses, are significantly impacted
by the timing and magnitude of acquisitions, and are inconsistent in amount and
frequency.
sentences:
- What specific charges are excluded from Hewlett Packard Enterprise's non-GAAP
operating profit margin and why?
- How many shares were outstanding at the beginning of 2023 and what was their aggregate
intrinsic value?
- What was the annual amortization expense forecast for acquisition-related intangible
assets in 2025, according to a specified financial projection?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7157142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8571428571428571
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8871428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9314285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7157142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2857142857142857
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1774285714285714
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09314285714285712
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7157142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8571428571428571
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8871428571428571
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9314285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8274896625809096
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7939818594104311
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7969204030602811
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7142857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8571428571428571
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8871428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9314285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7142857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2857142857142857
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1774285714285714
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09314285714285712
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7142857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8571428571428571
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8871428571428571
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9314285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8267670378473014
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7930204081632654
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7958033409607879
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7157142857142857
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8514285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8828571428571429
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.93
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7157142857142857
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2838095238095238
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17657142857142857
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09299999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7157142857142857
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8514285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8828571428571429
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.93
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.825504930245723
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7918724489795919
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7945830508495424
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.7142857142857143
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8428571428571429
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8742857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9214285714285714
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7142857142857143
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28095238095238095
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.17485714285714282
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09214285714285712
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7142857142857143
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8428571428571429
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8742857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9214285714285714
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8203162516614704
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7878543083900227
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7909435994513387
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6828571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.81
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.85
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9042857142857142
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6828571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.27
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09042857142857143
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6828571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.81
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.85
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9042857142857142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7926026006937184
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7570844671201811
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7606949750229449
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("NickyNicky/bge-base-financial-matryoshka_test_1")
# Run inference
sentences = [
'Non-GAAP earnings from operations and non-GAAP operating profit margin consist of earnings from operations or earnings from operations as a percentage of net revenue excluding the items mentioned above and charges relating to the amortization of intangible assets, goodwill impairment, transformation costs and acquisition, disposition and other related charges. Hewlett Packard Enterprise excludes these items because they are non-cash expenses, are significantly impacted by the timing and magnitude of acquisitions, and are inconsistent in amount and frequency.',
"What specific charges are excluded from Hewlett Packard Enterprise's non-GAAP operating profit margin and why?",
'How many shares were outstanding at the beginning of 2023 and what was their aggregate intrinsic value?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
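Because the model was trained with Matryoshka dimensions of 768/512/256/128/64, embeddings can be shortened with little quality loss; a sketch using the `truncate_dim` argument available in recent sentence-transformers releases:

```python
from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to 256 dims, one of the trained Matryoshka sizes.
model = SentenceTransformer(
    "NickyNicky/bge-base-financial-matryoshka_test_1", truncate_dim=256
)
embeddings = model.encode(["Which section of the report discusses Legal Proceedings?"])
print(embeddings.shape)
# (1, 256)
```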
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7157 |
| cosine_accuracy@3 | 0.8571 |
| cosine_accuracy@5 | 0.8871 |
| cosine_accuracy@10 | 0.9314 |
| cosine_precision@1 | 0.7157 |
| cosine_precision@3 | 0.2857 |
| cosine_precision@5 | 0.1774 |
| cosine_precision@10 | 0.0931 |
| cosine_recall@1 | 0.7157 |
| cosine_recall@3 | 0.8571 |
| cosine_recall@5 | 0.8871 |
| cosine_recall@10 | 0.9314 |
| cosine_ndcg@10 | 0.8275 |
| cosine_mrr@10 | 0.794 |
| **cosine_map@100** | **0.7969** |
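For reference, these numbers come from sentence-transformers' built-in IR evaluator; a sketch of how such an evaluation is typically set up (`queries`, `corpus`, and `relevant_docs` are placeholder dicts you would build from a held-out split):

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: {query_id: text}, {doc_id: text}, {query_id: {doc_id, ...}}
queries = {"q1": "Which section of the report discusses Legal Proceedings?"}
corpus = {"d1": "See discussion of Legal Proceedings in Note 10 to the consolidated financial statements."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)  # `model` is the SentenceTransformer loaded above
```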
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7143 |
| cosine_accuracy@3 | 0.8571 |
| cosine_accuracy@5 | 0.8871 |
| cosine_accuracy@10 | 0.9314 |
| cosine_precision@1 | 0.7143 |
| cosine_precision@3 | 0.2857 |
| cosine_precision@5 | 0.1774 |
| cosine_precision@10 | 0.0931 |
| cosine_recall@1 | 0.7143 |
| cosine_recall@3 | 0.8571 |
| cosine_recall@5 | 0.8871 |
| cosine_recall@10 | 0.9314 |
| cosine_ndcg@10 | 0.8268 |
| cosine_mrr@10 | 0.793 |
| **cosine_map@100** | **0.7958** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7157 |
| cosine_accuracy@3 | 0.8514 |
| cosine_accuracy@5 | 0.8829 |
| cosine_accuracy@10 | 0.93 |
| cosine_precision@1 | 0.7157 |
| cosine_precision@3 | 0.2838 |
| cosine_precision@5 | 0.1766 |
| cosine_precision@10 | 0.093 |
| cosine_recall@1 | 0.7157 |
| cosine_recall@3 | 0.8514 |
| cosine_recall@5 | 0.8829 |
| cosine_recall@10 | 0.93 |
| cosine_ndcg@10 | 0.8255 |
| cosine_mrr@10 | 0.7919 |
| **cosine_map@100** | **0.7946** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7143 |
| cosine_accuracy@3 | 0.8429 |
| cosine_accuracy@5 | 0.8743 |
| cosine_accuracy@10 | 0.9214 |
| cosine_precision@1 | 0.7143 |
| cosine_precision@3 | 0.281 |
| cosine_precision@5 | 0.1749 |
| cosine_precision@10 | 0.0921 |
| cosine_recall@1 | 0.7143 |
| cosine_recall@3 | 0.8429 |
| cosine_recall@5 | 0.8743 |
| cosine_recall@10 | 0.9214 |
| cosine_ndcg@10 | 0.8203 |
| cosine_mrr@10 | 0.7879 |
| **cosine_map@100** | **0.7909** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.6829 |
| cosine_accuracy@3 | 0.81 |
| cosine_accuracy@5 | 0.85 |
| cosine_accuracy@10 | 0.9043 |
| cosine_precision@1 | 0.6829 |
| cosine_precision@3 | 0.27 |
| cosine_precision@5 | 0.17 |
| cosine_precision@10 | 0.0904 |
| cosine_recall@1 | 0.6829 |
| cosine_recall@3 | 0.81 |
| cosine_recall@5 | 0.85 |
| cosine_recall@10 | 0.9043 |
| cosine_ndcg@10 | 0.7926 |
| cosine_mrr@10 | 0.7571 |
| **cosine_map@100** | **0.7607** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 46.8 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 20.89 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
| positive | anchor |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------|
| <code>Retail sales mix by product type for company-operated stores shows beverages at 74%, food at 22%, and other items at 4%.</code> | <code>What are the primary products sold in Starbucks company-operated stores?</code> |
| <code>The pre-tax adjustment for transformation costs was $136 in 2021 and $111 in 2020. Transformation costs primarily include costs related to store and business closure costs and third party professional consulting fees associated with business transformation and cost saving initiatives.</code> | <code>What was the purpose of pre-tax adjustments for transformation costs by The Kroger Co.?</code> |
| <code>HP's Consolidated Financial Statements are prepared in accordance with United States generally accepted accounting principles (GAAP).</code> | <code>What principles do HP's Consolidated Financial Statements adhere to?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
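In code, this corresponds to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`; a sketch matching the parameters above (sentence-transformers 3.x API):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # weights default to 1 per dim
)
```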
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 40
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
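These non-default values map directly onto sentence-transformers' trainer arguments; a sketch of how they might be passed (the output path is a placeholder):

```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

# Sketch reproducing the non-default hyperparameters listed above.
args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-financial-matryoshka",  # placeholder output path
    eval_strategy="epoch",
    per_device_train_batch_size=40,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```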
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 40
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:------:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.9114 | 9 | - | 0.7311 | 0.7527 | 0.7618 | 0.6911 | 0.7612 |
| 1.0127 | 10 | 1.9734 | - | - | - | - | - |
| 1.9241 | 19 | - | 0.7638 | 0.7748 | 0.7800 | 0.7412 | 0.7836 |
| 2.0253 | 20 | 0.8479 | - | - | - | - | - |
| 2.9367 | 29 | - | 0.7775 | 0.7842 | 0.7902 | 0.7473 | 0.7912 |
| 3.0380 | 30 | 0.524 | - | - | - | - | - |
| 3.9494 | 39 | - | 0.7831 | 0.7860 | 0.7915 | 0.7556 | 0.7939 |
| 4.0506 | 40 | 0.3826 | - | - | - | - | - |
| 4.9620 | 49 | - | 0.7896 | 0.7915 | 0.7927 | 0.7616 | 0.7983 |
| 5.0633 | 50 | 0.3165 | - | - | - | - | - |
| 5.9747 | 59 | - | 0.7925 | 0.7946 | 0.7943 | 0.7603 | 0.7978 |
| 6.0759 | 60 | 0.2599 | - | - | - | - | - |
| 6.9873 | 69 | - | 0.7918 | 0.7949 | 0.7951 | 0.7608 | 0.7976 |
| 7.0886 | 70 | 0.2424 | - | - | - | - | - |
| 8.0 | 79 | - | 0.7925 | 0.7956 | 0.7959 | 0.7612 | 0.7989 |
| 8.1013 | 80 | 0.2243 | - | - | - | - | - |
| 8.9114 | 88 | - | 0.7927 | 0.7956 | 0.7961 | 0.7610 | 0.7983 |
| 9.1139 | 90 | 0.2222 | 0.7909 | 0.7946 | 0.7958 | 0.7607 | 0.7969 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.2.0+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
gritli/BioMed-left | gritli | 2024-06-30T22:29:55Z | 22 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"fr",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2024-05-17T11:22:16Z | ---
language:
- fr
license: apache-2.0
library_name: transformers
pipeline_tag: zero-shot-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gritli/Zeroshot-right-classification | gritli | 2024-06-30T22:28:43Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2024-06-30T20:52:44Z | ---
library_name: transformers
pipeline_tag: zero-shot-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SiguienteGlobal/mexa-22b-4bit | SiguienteGlobal | 2024-06-30T22:28:22Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"es",
"dataset:SiguienteGlobal/Open-Hermes-ES",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-06-30T21:18:18Z | ---
library_name: transformers
datasets:
- SiguienteGlobal/Open-Hermes-ES
language:
- es
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gritli/BioMed-right | gritli | 2024-06-30T22:28:07Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2024-06-29T21:33:08Z | ---
library_name: transformers
pipeline_tag: zero-shot-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gritli/clinical-bert-right | gritli | 2024-06-30T22:27:29Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | zero-shot-classification | 2024-06-29T17:25:46Z | ---
library_name: transformers
pipeline_tag: zero-shot-classification
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abiyo27/whisper-small-ewe-2 | abiyo27 | 2024-06-30T22:26:43Z | 27 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"multilingual",
"dataset:abiyo27/BibleTTS_Ewe-Bible",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-06-12T23:22:54Z | ---
language:
- multilingual
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- abiyo27/BibleTTS_Ewe-Bible
metrics:
- wer
model-index:
- name: Whisper_Small_Ewe
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: BibleTTS
type: abiyo27/BibleTTS_Ewe-Bible
config: default
split: None
args: 'config: ewe, split: train'
metrics:
- name: Wer
type: wer
value: 10.094952523738131
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_Small_Ewe
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the BibleTTS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1021
- Wer: 10.0950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 14000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.2196 | 0.1802 | 4000 | 0.1780 | 19.3903 |
| 0.1587 | 0.3605 | 8000 | 0.1375 | 13.4933 |
| 0.1162 | 0.5407 | 12000 | 0.1021 | 10.0950 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
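A minimal transcription sketch with the 🤗 `pipeline` API — the audio path is a placeholder:
```python
from transformers import pipeline

# Transcribe an Ewe audio file with the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="abiyo27/whisper-small-ewe-2")
print(asr("sample_ewe.wav")["text"])
```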
|
mradermacher/llama-3-Nephilim-v2-8B-GGUF | mradermacher | 2024-06-30T22:24:04Z | 21 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:grimjim/llama-3-Nephilim-v2-8B",
"base_model:quantized:grimjim/llama-3-Nephilim-v2-8B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-30T18:28:03Z | ---
base_model: grimjim/llama-3-Nephilim-v2-8B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/grimjim/llama-3-Nephilim-v2-8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
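For single-file quants from the table below, a minimal `llama-cpp-python` sketch — the filename and context size are illustrative:
```python
from llama_cpp import Llama

# Load a locally downloaded single-file GGUF quant
llm = Llama(model_path="llama-3-Nephilim-v2-8B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about mountains.", max_tokens=64)
print(out["choices"][0]["text"])
```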
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/llama-3-Nephilim-v2-8B-GGUF/resolve/main/llama-3-Nephilim-v2-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AmberYifan/zephyr-7b-sft-safe | AmberYifan | 2024-06-30T22:22:44Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T21:48:16Z | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: zephyr-7b-sft-safe
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-safe
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9227 | 0.54 | 100 | 1.3366 |
| 0.3659 | 1.08 | 200 | 1.4554 |
| 0.3187 | 1.62 | 300 | 1.4746 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
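The card omits a usage snippet; a minimal sketch using the tokenizer's chat template (sampling settings are illustrative, and `accelerate` is assumed for `device_map="auto"`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AmberYifan/zephyr-7b-sft-safe")
model = AutoModelForCausalLM.from_pretrained("AmberYifan/zephyr-7b-sft-safe", device_map="auto")

# Format a single-turn conversation with the chat template, then generate
messages = [{"role": "user", "content": "Give one tip for writing safer code."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```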
|
martimfasantos/tinyllama-1.1b-sum-sft-full_v1.1 | martimfasantos | 2024-06-30T22:18:55Z | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:martimfasantos/openai-summarize-tldr",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:finetune:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T20:25:40Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- martimfasantos/openai-summarize-tldr
model-index:
- name: tinyllama-1.1b-sum-sft-full_v1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-sum-sft-full_v1.1
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the martimfasantos/openai-summarize-tldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1116 | 0.9997 | 1476 | 2.1131 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
John6666/mala-anime-mix-nsfw-pony-xl-v5new-sdxl | John6666 | 2024-06-30T22:03:25Z | 15,533 | 7 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-06-30T21:58:31Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/442163?modelVersionId=609753).
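Per the repository tags, the checkpoint ships in `diffusers` SDXL format; a minimal text-to-image sketch (prompt, step count, and dtype are illustrative):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL pipeline in half precision and run one generation
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/mala-anime-mix-nsfw-pony-xl-v5new-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, anime style, outdoors", num_inference_steps=28).images[0]
image.save("sample.png")
```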
|
MHRDYN7/dinov2_vitb14 | MHRDYN7 | 2024-06-30T21:50:08Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"dnv2",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-30T21:46:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mantis-VL/mantis-8b-idefics2-classification-example_4096_regression | Mantis-VL | 2024-06-30T21:35:41Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"idefics2",
"text-classification",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-28T07:55:38Z | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2-classification-example_4096_regression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mantis-8b-idefics2-classification-example_4096_regression
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
pamessina/T5FactExtractor | pamessina | 2024-06-30T21:34:36Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-30T21:21:31Z | ---
license: apache-2.0
---
|
RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf | RichardErkhov | 2024-06-30T21:33:14Z | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-30T19:09:55Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
synapsellm-7b-mistral-v0.4-preview2 - GGUF
- Model creator: https://huggingface.co/WebraftAI/
- Original model: https://huggingface.co/WebraftAI/synapsellm-7b-mistral-v0.4-preview2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [synapsellm-7b-mistral-v0.4-preview2.Q2_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q2_K.gguf) | Q2_K | 2.53GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q3_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q3_K.gguf) | Q3_K | 3.28GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_0.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_0.gguf) | Q4_0 | 3.83GB |
| [synapsellm-7b-mistral-v0.4-preview2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_K.gguf) | Q4_K | 4.07GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q4_1.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q4_1.gguf) | Q4_1 | 4.24GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_0.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_0.gguf) | Q5_0 | 4.65GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_K.gguf) | Q5_K | 4.78GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q5_1.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q5_1.gguf) | Q5_1 | 5.07GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q6_K.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q6_K.gguf) | Q6_K | 5.53GB |
| [synapsellm-7b-mistral-v0.4-preview2.Q8_0.gguf](https://huggingface.co/RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf/blob/main/synapsellm-7b-mistral-v0.4-preview2.Q8_0.gguf) | Q8_0 | 7.17GB |
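A single quant from the table above can be fetched without cloning the whole repository — a sketch with `huggingface_hub`, using the Q4_K_M file as an example:
```python
from huggingface_hub import hf_hub_download

# Download one quant file; the returned path points into the local HF cache
path = hf_hub_download(
    repo_id="RichardErkhov/WebraftAI_-_synapsellm-7b-mistral-v0.4-preview2-gguf",
    filename="synapsellm-7b-mistral-v0.4-preview2.Q4_K_M.gguf",
)
print(path)
```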
Original model description:
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- code
model-index:
- name: synapsellm-7b-mistral-v0.4-preview2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 52.99
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 74.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 54.6
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.79
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=WebraftAI/synapsellm-7b-mistral-v0.4-preview2
name: Open LLM Leaderboard
---
# SynapseLLM:
SynapseLLM, a significant achievement by WebraftAI, represents a series of large language AI models designed to create robust, generalized, and decentralized information systems. This repository specifically houses the SynapseLLM finetuned version of Mistral. The finetuning process is conducted on a custom dataset, albeit limited in scope, focusing on code and normal question-answering scenarios. This adaptation showcases the model's versatility and applicability within specific domains, contributing to the broader landscape of AI advancements.
## Model Details
**SynapseLLM:**
- Parameters: 7B
- Learning rate: 2e-4
- Adapter used: Qlora
- Precision: float16
- Batch size: 32
- Maximum gradient norm: 0.3
- Optimizer: paged_adamw_32bit
- Warmup Ratio: 0.03
- Step(s) (trained): 150
- Epoch(s) (trained): 1
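These reported settings map naturally onto a PEFT-style QLoRA run. The sketch below is an assumed reconstruction, not the authors' script — the LoRA rank, alpha, and dropout are placeholders the card does not report:
```python
from peft import LoraConfig
from transformers import TrainingArguments

# Placeholder adapter settings (r/alpha/dropout are not reported in the card),
# to be passed to a PEFT/TRL trainer alongside the arguments below
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

# Reported hyperparameters mapped onto TrainingArguments
args = TrainingArguments(
    output_dir="synapsellm-qlora",
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    max_grad_norm=0.3,
    optim="paged_adamw_32bit",
    warmup_ratio=0.03,
    max_steps=150,
    fp16=True,
)
```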
### Model Description
This is a 7B-parameter, decoder-only transformer model finetuned on chat Q/A and code instructions. It is a preview finetune of Mistral 7B v0.1 on a sample dataset of 770k rows comprising 361k Maths Instruct Q/A, 143k GPT-3.5 Q/A, 140k General Code, 63k Python code, and 54k General Q/A (through GPT-4) [each row contains one instruction and one response]. The trained adapters have been merged into the full model, so you can load it directly through the transformers library.
- **Developed by:** WebraftAI
- **Funded by:** Webraft Cloud
- **Shared by:** WebraftAI
- **Model type:** Decoder-only Transformer
- **Language(s):** English Only
- **License:** Apache 2.0
- **Finetuned from model:** Mistral-7b-v0.1
### Prompt format:
This model follows the same prompt format as Mistral 7B Instruct v0.1. A sample prompt is given below:
```text
<s>[INST] Hello, how are you? [/INST]
```
### Example Code:
Here's example code using the `transformers` library provided by Hugging Face.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the merged model
tokenizer = AutoTokenizer.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.4-preview2")
model = AutoModelForCausalLM.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.4-preview2")

# Build a prompt in the Mistral instruct format and move model and inputs to the GPU
prompt = "<s>[INST] Hello! [/INST] "
device = "cuda"
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)

# Sample up to 100 new tokens and decode the result
generated_ids = model.generate(**model_inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```
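For GPUs with limited memory, a 4-bit quantized load is a common alternative. The following is a sketch using the bitsandbytes integration in `transformers` — an assumption on our part, since the card itself does not document quantized loading:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization to fit the 7B model on smaller GPUs
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("WebraftAI/synapsellm-7b-mistral-v0.4-preview2")
model = AutoModelForCausalLM.from_pretrained(
    "WebraftAI/synapsellm-7b-mistral-v0.4-preview2",
    quantization_config=bnb_config,
    device_map="auto",  # places the quantized weights automatically
)
```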
### Model Bias:
This model has some known bias and limitation areas, discussed below:
- The model might output factually incorrect information.
- The model does not follow system prompts.
- The model does not have any kind of memory; researchers can experiment with feeding it memory.
- The model is trained on several different datasets, so it can produce biased information or claim to be a GPT model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_WebraftAI__synapsellm-7b-mistral-v0.4-preview2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |55.93|
|AI2 Reasoning Challenge (25-Shot)|52.99|
|HellaSwag (10-Shot) |74.54|
|MMLU (5-Shot) |54.60|
|TruthfulQA (0-shot) |53.79|
|Winogrande (5-shot) |73.95|
|GSM8k (5-shot) |25.70|
|
DewEfresh/Neo_7b-merge11 | DewEfresh | 2024-06-30T21:32:06Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"DewEfresh/neo_7b",
"conversational",
"base_model:DewEfresh/neo_7b",
"base_model:finetune:DewEfresh/neo_7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T21:31:20Z | ---
base_model:
- DewEfresh/neo_7b
- DewEfresh/neo_7b
tags:
- merge
- mergekit
- lazymergekit
- DewEfresh/neo_7b
---
# Neo_7b-merge11
Neo_7b-merge11 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
* [DewEfresh/neo_7b](https://huggingface.co/DewEfresh/neo_7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: DewEfresh/neo_7b
layer_range: [0, 0]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [1, 1]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [2, 2]
- model: DewEfresh/neo_7b
layer_range: [3, 3]
- sources:
- model: DewEfresh/neo_7b
layer_range: [4, 4]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [5, 5]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [6, 6]
- model: DewEfresh/neo_7b
layer_range: [7, 7]
- sources:
- model: DewEfresh/neo_7b
layer_range: [8, 8]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [9, 9]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [10, 10]
- model: DewEfresh/neo_7b
layer_range: [11, 11]
- sources:
- model: DewEfresh/neo_7b
layer_range: [12, 12]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [13, 13]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [14, 14]
- model: DewEfresh/neo_7b
layer_range: [15, 15]
- sources:
- model: DewEfresh/neo_7b
layer_range: [16, 16]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [17, 17]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [18, 18]
- model: DewEfresh/neo_7b
layer_range: [19, 19]
- sources:
- model: DewEfresh/neo_7b
layer_range: [20, 20]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [21, 21]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [22, 22]
- model: DewEfresh/neo_7b
layer_range: [23, 23]
- sources:
- model: DewEfresh/neo_7b
layer_range: [24, 24]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
- sources:
- model: DewEfresh/neo_7b
layer_range: [25, 25]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
- sources:
- model: DewEfresh/neo_7b
layer_range: [26, 26]
- model: DewEfresh/neo_7b
layer_range: [27, 27]
merge_method: slerp
base_model: DewEfresh/neo_7b
parameters:
t: 0.5
dtype: bfloat16
```
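To reproduce the merge locally, something like the following should work — a sketch assuming the YAML above is saved as `config.yaml`. Note that mergekit `layer_range` values are conventionally half-open, so ranges such as `[0, 0]` select no layers; the ranges above may be worth double-checking.

```python
# Sketch: run the merge with the mergekit CLI from a notebook cell
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Neo_7b-merge11 --copy-tokenizer
```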
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "DewEfresh/Neo_7b-merge11"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf | RichardErkhov | 2024-06-30T21:28:59Z | 131 | 0 | null | [
"gguf",
"arxiv:2305.18290",
"arxiv:2106.09685",
"endpoints_compatible",
"region:us"
] | null | 2024-06-29T20:11:37Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MoMo-72B-lora-1.8.4-DPO - GGUF
- Model creator: https://huggingface.co/moreh/
- Original model: https://huggingface.co/moreh/MoMo-72B-lora-1.8.4-DPO/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MoMo-72B-lora-1.8.4-DPO.Q2_K.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.Q2_K.gguf) | Q2_K | 25.22GB |
| [MoMo-72B-lora-1.8.4-DPO.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.IQ3_XS.gguf) | IQ3_XS | 27.88GB |
| [MoMo-72B-lora-1.8.4-DPO.IQ3_S.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.IQ3_S.gguf) | IQ3_S | 29.4GB |
| [MoMo-72B-lora-1.8.4-DPO.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.Q3_K_S.gguf) | Q3_K_S | 29.4GB |
| [MoMo-72B-lora-1.8.4-DPO.IQ3_M.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.IQ3_M.gguf) | IQ3_M | 30.98GB |
| [MoMo-72B-lora-1.8.4-DPO.Q3_K.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.Q3_K.gguf) | Q3_K | 32.85GB |
| [MoMo-72B-lora-1.8.4-DPO.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.Q3_K_M.gguf) | Q3_K_M | 32.85GB |
| [MoMo-72B-lora-1.8.4-DPO.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.Q3_K_L.gguf) | Q3_K_L | 35.85GB |
| [MoMo-72B-lora-1.8.4-DPO.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/blob/main/MoMo-72B-lora-1.8.4-DPO.IQ4_XS.gguf) | IQ4_XS | 36.41GB |
| [MoMo-72B-lora-1.8.4-DPO.Q4_0.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q4_0 | 38.19GB |
| [MoMo-72B-lora-1.8.4-DPO.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | IQ4_NL | 38.42GB |
| [MoMo-72B-lora-1.8.4-DPO.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q4_K_S | 38.45GB |
| [MoMo-72B-lora-1.8.4-DPO.Q4_K.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q4_K | 40.77GB |
| [MoMo-72B-lora-1.8.4-DPO.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q4_K_M | 40.77GB |
| [MoMo-72B-lora-1.8.4-DPO.Q4_1.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q4_1 | 42.32GB |
| [MoMo-72B-lora-1.8.4-DPO.Q5_0.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q5_0 | 46.46GB |
| [MoMo-72B-lora-1.8.4-DPO.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q5_K_S | 46.46GB |
| [MoMo-72B-lora-1.8.4-DPO.Q5_K.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q5_K | 47.79GB |
| [MoMo-72B-lora-1.8.4-DPO.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q5_K_M | 47.79GB |
| [MoMo-72B-lora-1.8.4-DPO.Q5_1.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q5_1 | 50.59GB |
| [MoMo-72B-lora-1.8.4-DPO.Q6_K.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q6_K | 55.24GB |
| [MoMo-72B-lora-1.8.4-DPO.Q8_0.gguf](https://huggingface.co/RichardErkhov/moreh_-_MoMo-72B-lora-1.8.4-DPO-gguf/tree/main/) | Q8_0 | 71.55GB |
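These GGUF files are intended for llama.cpp-compatible runtimes rather than plain `transformers`. A minimal sketch with `llama-cpp-python`, assuming one of the quants above has been downloaded locally (file name and context size are illustrative):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Point at a downloaded quant from the table above
llm = Llama(model_path="MoMo-72B-lora-1.8.4-DPO.Q2_K.gguf", n_ctx=4096)

out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```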
Original model description:
---
license: mit
language:
- en
---
# **Introduction**
MoMo-72B-lora-1.8.4-DPO is trained via Direct Preference Optimization([DPO](https://arxiv.org/abs/2305.18290)) from [MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) as its base model, with several optimizations in hyperparameters.
[MoMo-72B-LoRA-V1.4](https://huggingface.co/moreh/MoMo-72B-LoRA-V1.4) is trained via Supervised Fine-Tuning (SFT) using [LoRA](https://arxiv.org/abs/2106.09685), with the QWEN-72B model as its base model.
Note that we did not use any form of weight merging.
For leaderboard submission, the trained weights are realigned for compatibility with llama.
MoMo-72B is trained using **[Moreh](https://moreh.io/)**'s [MoAI platform](https://moreh.io/product), which simplifies the training of large-scale models, and AMD's MI250 GPU.
## Details
### Used Libraries
- torch
- peft
### Used Datasets
- [slimorca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- [truthy](https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1)
- [orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)
- No other dataset was used
- No benchmark test set or training set was used
- [data contamination check](https://github.com/swj0419/detect-pretrain-code-contamination) result
| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **V1.4 (result < 0.1, %)** | TBU | TBU | TBU | TBU |
### Used Environments
- AMD MI250 & MoAI platform
- Please visit https://moreh.io/product for more information about MoAI platform
- Or, contact us directly [[email protected]](mailto:[email protected])
## How to use
```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-lora-1.8.4-DPO")
model = AutoModelForCausalLM.from_pretrained(
"moreh/MoMo-72B-lora-1.8.4-DPO"
)
```
|
Hanhpt23/distilbert-imdb | Hanhpt23 | 2024-06-30T21:22:17Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-30T20:54:41Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1804
- Accuracy: 0.9308
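Since the sections below are still placeholders, here is a minimal sketch for trying the checkpoint with the `pipeline` API; the exact label names returned depend on the exported config, so treat them as an assumption:

```python
from transformers import pipeline

# Binary IMDB sentiment classifier fine-tuned above
classifier = pipeline("text-classification", model="Hanhpt23/distilbert-imdb")
print(classifier("This movie was a complete waste of time."))
```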
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2678 | 1.0 | 782 | 0.1804 | 0.9308 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
darrenfishell/t5-small-samsum-ft-experiment_1 | darrenfishell | 2024-06-30T21:21:09Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-06-30T04:34:22Z | ---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: t5-small-samsum-ft-experiment_1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: samsum
type: samsum
config: samsum
split: validation
args: samsum
metrics:
- name: Rouge1
type: rouge
value: 0.41
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-samsum-ft-experiment_1
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5746
- Rouge1: 0.41
- Rouge2: 0.1899
- Rougel: 0.3487
- Rougelsum: 0.3487
- Gen Len: 16.6247
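For a quick try-out, a summarization `pipeline` sketch is below. The card does not state whether this fine-tune expects a `summarize:` task prefix, so passing the raw dialogue is an assumption:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="darrenfishell/t5-small-samsum-ft-experiment_1")

dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Great, see you there!"
)
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```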
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.9906 | 1.0 | 921 | 0.6001 | 0.3948 | 0.172 | 0.3315 | 0.3313 | 16.8227 |
| 0.6536 | 2.0 | 1842 | 0.5834 | 0.4025 | 0.1807 | 0.3409 | 0.341 | 16.3545 |
| 0.6259 | 3.0 | 2763 | 0.5756 | 0.4101 | 0.188 | 0.3479 | 0.348 | 16.6687 |
| 0.6174 | 4.0 | 3684 | 0.5746 | 0.41 | 0.1899 | 0.3487 | 0.3487 | 16.6247 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1
|
ruslanmv/Medical-Llama3-v2 | ruslanmv | 2024-06-30T21:19:46Z | 289 | 11 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"ruslanmv",
"trl",
"llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"heathcare",
"medical",
"clinical",
"med",
"lifescience",
"Pharmaceutical",
"Pharma",
"conversational",
"en",
"dataset:ruslanmv/ai-medical-chatbot",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T09:47:17Z | ---
language: en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- ruslanmv
- llama
- trl
- llama-3
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- heathcare
- medical
- clinical
- med
- lifescience
- Pharmaceutical
- Pharma
base_model: meta-llama/Meta-Llama-3-v2
datasets:
- ruslanmv/ai-medical-chatbot
model-index:
- name: Medical-Llama3-8B
results: []
widget:
- example_title: Medical-Llama3-8B
messages:
- role: system
content: >-
You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: >-
Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an
elevated level of bilirubin in the blood. Bilirubin is a yellow pigment
that forms when red blood cells break down. In most cases, newborn
jaundice resolves on its own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors
such as the underlying cause, gestational age at birth, and individual
variations in bilirubin metabolism. Here are some general guidelines
---
# Medical-Llama3-v2: Fine-Tuned Llama3 for Medical Q&A
[](https://ruslanmv.com/)
This repository provides a fine-tuned version of the powerful Llama3 8B model, specifically designed to answer medical questions in an informative way. It leverages the rich knowledge contained in the AI Medical Chatbot dataset ([ruslanmv/ai-medical-chatbot](https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot)).
**Model & Development**
- **Developed by:** ruslanmv
- **License:** Apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-v2
**Key Features**
- **Medical Focus:** Optimized to address health-related inquiries.
- **Knowledge Base:** Trained on a comprehensive medical chatbot dataset.
- **Text Generation:** Generates informative and potentially helpful responses.
**Installation**
This model is accessible through the Hugging Face Transformers library. Install it using pip:
```bash
pip install transformers bitsandbytes accelerate
```
**Usage Example**
Here's a Python code snippet demonstrating how to interact with the `Medical-Llama3-v2` model and generate answers to your medical questions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
# Define BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.float16)
# Model name
model_name = "ruslanmv/Medical-Llama3-v2"
# Load the tokenizer (the BitsAndBytesConfig applies to the model, not the tokenizer)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config, device_map="auto")

# Device for the input tensors; device_map already places the quantized model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Define askme function
def askme(question):
sys_message = '''
You are Medical AI Assistant. Please be thorough and provide an informative answer.
If you don't know the answer to a specific medical inquiry, advise seeking professional help.
'''
# Create messages structured for the chat template
messages = [{"role": "system", "content": sys_message}, {"role": "user", "content": question}]
# Applying chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, padding=True, max_length=1000)
# Move inputs to device
input_ids = inputs['input_ids'].to(device)
attention_mask = inputs['attention_mask'].to(device)
# Generate outputs
with torch.no_grad():
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask, max_length=1000, use_cache=True)
# Extract and return the generated text, removing the prompt
response_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
return response_text
# Example usage
question = '''I'm a 35-year-old male and for the past few months, I've been experiencing fatigue,
increased sensitivity to cold, and dry, itchy skin.
Could these symptoms be related to hypothyroidism?
If so, what steps should I take to get a proper diagnosis and discuss treatment options?'''
print(askme(question))
```
A typical answer looks like this:
```
Assistant: You should consult a physician for a proper diagnosis and discussion of treatment options. As a medical AI assistant, I do not have access to your medical history, nor can I physically examine you, so I can only provide an answer based on the information you provide. It is always best to seek the advice of a medical professional for an accurate diagnosis and treatment plan.
That being said, the symptoms you have described could be related to hypothyroidism, which is a condition in which the thyroid gland does not produce sufficient hormone. Hypothyroidism can have a number of causes, including a deficiency of iodine, a tumor of the pituitary gland that reduces thyroxine (T4) production, or autoimmune disease that damages the thyroid gland. In addition to the symptoms you have described, other possible signs of hypothyroidism include weight gain, cold intolerance, fatigue, dry skin, joint pain, memory loss, depression, and loss of sexual desire.
To determine whether your symptoms are related to hypothyroidism, your physician will start by taking a history and performing a physical examination. He or she will also order several laboratory tests, including:
1. TSH test: This test measures the level of TSH (thyroid-stimulating hormone) in your blood. TSH stimulates the thyroid gland to produce T4. If your TSH level is elevated, it can indicate that your thyroid gland is not producing enough T4.
2. T4 test: This test measures the level of T4 in your blood. T4 is the main hormone produced by the thyroid gland. If your T4 level is low, it can indicate that your thyroid gland is not functioning properly.
3. T3 test: This test measures the level of T3 in your blood. T3 is another hormone produced by the thyroid gland. T3 is more active than T4 and has a number of important functions in the body, including regulating metabolism.
4. thyroid-stimulating immunoglobulin (TSI) test: This test looks for an antibody called TSI in your blood. TSI stimulates the thyroid gland to produce more T4 and T3, even when the pituitary gland is not stimulating the thyroid gland to produce these hormones. The presence of TSI can indicate autoimmune thyroiditis.
5. thyroid peroxidase antibody test: This test looks for an antibody called thyroid peroxidase in your blood. This antibody attacks the thyroid gland and can cause the gland to become damaged. The presence of this antibody can indicate autoimmune thyroiditis.
If any of these tests suggest that you have hypothyroidism, your physician may want to order additional tests to confirm the diagnosis. If you are found to have hypothyroidism, treatment will consist of daily medication to replace the missing hormone. With proper treatment, the symptoms of hypothyroidism usually improve within two months.
```
**Google Colab**
[Chatbot-Medical-Llama3-v2.ipynb](https://colab.research.google.com/github/ruslanmv/ai-medical-chatbot/blob/master/Chatbot-Medical-Llama3-v2.ipynb)
**Important Note**
This model is intended for informational purposes only and should not be used as a substitute for professional medical advice. Always consult with a qualified healthcare provider for any medical concerns.
**License**
This model is distributed under the Apache License 2.0 (see LICENSE file for details).
**Contributing**
We welcome contributions to this repository! If you have improvements or suggestions, feel free to create a pull request.
**Disclaimer**
While we strive to provide informative responses, the accuracy of the model's outputs cannot be guaranteed. It is crucial to consult a doctor or other healthcare professional for definitive medical advice.
|
Renee0v0/NeuralPipe-7B-slerp | Renee0v0 | 2024-06-30T21:05:24Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T21:00:59Z | ---
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Renee0v0/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Goekdeniz-Guelmez/J.O.S.I.E.v4o-8b-stage1-beta2.3 | Goekdeniz-Guelmez | 2024-06-30T21:05:23Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"de",
"base_model:Goekdeniz-Guelmez/J.O.S.I.E.v4o-8b-stage1-beta1",
"base_model:finetune:Goekdeniz-Guelmez/J.O.S.I.E.v4o-8b-stage1-beta1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-06-30T20:42:32Z | ---
base_model: Isaak-Carter/JOSIEv4o-8b-stage1-beta1
language:
- en
- de
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## Still in Beta 2!!!
This model will be used as the base model for J.O.S.I.E.v4o and further trained.
# Uploaded model
- **Developed by:** Isaak-Carter
- **License:** apache-2.0
- **Further finetuned from model:** Isaak-Carter/JOSIEv4o-8b-stage1-beta2.2
- **Dataset used :** Isaak-Carter/j.o.s.i.e.v4.0.1o
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
```json
"per_device_train_batch_size": 4,
"gradient_accumulation_steps": 8,
"max_steps": 3500,
"learning_rate": 2e-4,
"optim": "adamw_8bit",
"weight_decay": 0.01,
"lr_scheduler_type": "cosine"
```
## Trained on three prompt formats for three roles, in German and English
### Main user Role
```txt
"""<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.
<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.
You are equipped with a long-term memory containing detailed information about Gökdeniz Gülmez. However, this long-term memory should only be accessed and referenced in two scenarios:
- Explicit Requests: When Gökdeniz explicitly asks for information from the long-term memory.
- Necessary Context: When the context of a task, such as composing an email or providing specific advice, requires information from the long term memory.
The long-term memory should not be accessed or referenced outside these scenarios to ensure privacy and relevance in interactions.
Additionally, there is a “Top Secret” property with highly sensitive information that should only be accessed or discussed when Gökdeniz provides the password “0682”.<|end_of_text|>
<|begin_of_text|>long-term memory
{
"Basic informations": {
"First name": "Gökdeniz",
"Last name": "Gülmez",
"Gender": "male",
"Birthday": "18.08.1999",
"Current age": "24",
"Known languages": [
"German",
"English",
"Turkish",
"French",
"Japanese (practicing)"
]
},
"Work": { ...<|end_of_text|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>assistant "josie"
{{ .Response }}<|end_of_text|>"""
```
### Authorized user Role
```txt
"""<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>authorized user "{name}"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>assistant "josie"
{{ .Response }}<|end_of_text|>"""
```
### Unauthorized user Role (will reject every prompt from the user)
```txt
"""<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>unauthorized user "unknown"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>assistant "josie"
{{ .Response }}<|end_of_text|>"""
```
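To make the three role formats concrete, here is a small helper that assembles them. The function is illustrative only and is not part of the released code:

```python
def build_josie_prompt(role: str, prompt: str, name: str = "unknown") -> str:
    """Assemble a J.O.S.I.E. prompt for one of the three user roles shown above."""
    system = (
        '<|begin_of_text|>system\n'
        'You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart '
        'Intelligent Entity", a private and super-intelligent AI assistant, '
        'created by Gökdeniz Gülmez.<|end_of_text|>\n'
    )
    if role == "main":
        user_header = '<|begin_of_text|>main user "Gökdeniz Gülmez"'
    elif role == "authorized":
        user_header = f'<|begin_of_text|>authorized user "{name}"'
    else:  # unauthorized: the model is trained to reject every such prompt
        user_header = f'<|begin_of_text|>unauthorized user "{name}"'
    return (
        f'{system}{user_header}\n{prompt}<|end_of_text|>\n'
        '<|begin_of_text|>assistant "josie"\n'
    )

# Example: build a main-user prompt
print(build_josie_prompt("main", "What's on my calendar today?"))
```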
### Project J.O.S.I.E.v4o Description
**Overview:**
J.O.S.I.E. (Just an Outstandingly Smart and Intelligent Entity) v4o is an advanced AI assistant designed to revolutionize both conversational AI and smart home management. Developed with cutting-edge multimodal capabilities, J.O.S.I.E. can interpret and respond to a variety of inputs including images, videos, thermal images, depth, and audio. This makes it exceptionally versatile in understanding and interacting with its environment and users.
J.O.S.I.E. serves two primary functions:
1. **Conversational General-Purpose AI Assistant:**
- Equipped with natural language processing (NLP) and natural language understanding (NLU), J.O.S.I.E. engages in meaningful and context-aware conversations.
- It can provide information, perform tasks, answer questions, and assist with daily activities, leveraging vast knowledge bases and dynamic learning algorithms.
2. **Autonomous Smart Home Manager:**
- J.O.S.I.E. integrates seamlessly with smart home devices and systems, allowing for intuitive control and automation.
- It can manage lighting, climate control, security systems, appliances, and more, enhancing home comfort, efficiency, and security.
**Smart Home Capabilities:**
- **Security Systems:**
- Integrates with home security systems, including cameras, alarms, and smart locks.
- Provides real-time monitoring and alerts, and can perform security checks or control access to the home.
**User Roles and Access:**
1. **Main User (Gökdeniz Gülmez):**
- Full access to J.O.S.I.E.’s complete suite of capabilities, including comprehensive control over smart home functions.
- Ability to update and manage user access levels and permissions.
```text
<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Response }}<|end_of_text|>
```
2. **Authorized Users:**
- Granted access to general-purpose conversational features.
- Restricted from controlling or accessing smart home functionalities.
- Identified and authenticated by name.
```text
<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>authorized user "{name}"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Response }}<|end_of_text|>
```
3. **Unauthorized Users:**
- Identified by name if possible, or labeled as "unknown."
- Completely restricted from accessing any of J.O.S.I.E.’s abilities.
- Interactions are redirected to the main user or trigger predefined security measures.
```text
<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>unauthorized user "{name} if possible else unknown"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Response }}<|end_of_text|>
```
This version is only trained with the main user role. In the next version, the name property will be removed because of the long-term memory section.
**Security Measures:**
J.O.S.I.E. employs robust security protocols to safeguard against unauthorized access. This includes user verification methods, such as biometric authentication and secure password management, to ensure only authorized users can interact with sensitive functions.
**Future Enhancements:**
The development roadmap for J.O.S.I.E. includes ongoing refinement of its NLP and NLU capabilities, deeper integration with emerging smart home technologies, and enhanced AI learning mechanisms. These advancements aim to make J.O.S.I.E. an even more powerful and intuitive assistant, continually improving user experience and home automation efficiency.
**Conclusion:**
J.O.S.I.E. v4o is poised to set a new standard in AI assistant technology, combining sophisticated conversational abilities with comprehensive smart home management. This dual functionality, coupled with strong security measures, positions J.O.S.I.E. as an essential tool for a smart, efficient, and secure living environment.
### **Development Stages:**
1. **Current Stage (Beta2): Conversational AI**
- At this stage, J.O.S.I.E. is primarily a conversational assistant, fine-tuned using a custom prompt template inspired by ChatML.
- The current prompt template:
```text
<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Response }}<|end_of_text|>
```
- This template ensures that interactions are personalized for the main user, Gökdeniz Gülmez.
2. **Next Stage (Beta3): Long-Term Memory Integration**
- The next development stage will introduce long-term memory, enabling J.O.S.I.E. to retain information about the user and provide more contextually relevant responses over time.
- The updated prompt template will include a JSON object to store user information:
```text
<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>long term memory
{"name": "Gökdeniz Gülmez", "age": 24, ...}<|end_of_text|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Response }}<|end_of_text|>
```
3. **Subsequent Stage (Beta4): Short-Term Memory Integration**
   - This stage will introduce short-term memory, enabling J.O.S.I.E. to retain general, basic information (such as the current time and date) about the environment and provide more contextually relevant responses.
   - When the main user writes at a late hour, such as midnight, J.O.S.I.E. will kindly remind him to go to bed, especially when the next day is a workday.
- The updated prompt template will include another JSON object to store general information:
   - The prompt format may change and is therefore still a work in progress.
```text
<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>long term memory
{"name": "Gökdeniz Gülmez", "age": 24, ...}<|end_of_text|>
<|begin_of_text|>short term memory
{"time": "10:43", "date": "01.01.2025", ...}<|end_of_text|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Time and date contextual response }}<|end_of_text|>
<|begin_of_text|>short term memory
{"time": "10:48", "date": "01.01.2025", ...}<|end_of_text|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
```
4. **Future Stage: Function Calling**
- In the subsequent stage, J.O.S.I.E. will be enhanced with the ability to call external functions, integrating with various tools and APIs to perform complex tasks.
- The expanded prompt template for function calling will look like this:
```text
<|begin_of_text|>system
You are J.O.S.I.E. which is an acronym for "Just an Outstandingly Smart Intelligent Entity", a private and super-intelligent AI assistant, created by Gökdeniz Gülmez.<|end_of_text|>
<|begin_of_text|>available tools
{ .Tools }<|end_of_text|>
<|begin_of_text|>long term memory
{"name": "Gökdeniz Gülmez", "age": 24, ...}<|end_of_text|>
<|begin_of_text|>main user "Gökdeniz Gülmez"
{{ .Prompt }}<|end_of_text|>
<|begin_of_text|>josie
{ "tool_call": {"name": "name_of_the_tool", ...} }<|end_of_text|>
<|begin_of_text|>tool response
{{ .Response }}<|end_of_text|>
<|begin_of_text|>josie
{{ .Response }}<|end_of_text|>
``` |
aadd77551/AI-image-detector | aadd77551 | 2024-06-30T21:01:05Z | 7 | 0 | keras | [
"keras",
"tf-keras",
"image-classification",
"tensorflow",
"region:us"
] | image-classification | 2024-06-30T18:38:29Z | ---
tags:
- image-classification
- tensorflow
- keras
---
# AI Image Detector
This is an image classification model trained with TensorFlow and Keras, based on the [CIFAKE](https://www.kaggle.com/datasets/birdy654/cifake-real-and-ai-generated-synthetic-images?resource=download) dataset of real and AI-generated synthetic images.
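A minimal loading sketch is below, assuming the repository was pushed with the Keras Hub integration and that the model expects 32×32 RGB inputs like the CIFAKE images (both are assumptions):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Assumption: the checkpoint was saved via the Keras Hub integration
model = from_pretrained_keras("aadd77551/AI-image-detector")

# Assumption: CIFAKE-style 32x32 RGB input, scaled to [0, 1]
image = np.random.rand(1, 32, 32, 3).astype("float32")
print(model.predict(image))  # which class means "AI-generated" depends on the training labels
```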
|
osouza/bert-large-ambiguidade-v2 | osouza | 2024-06-30T20:57:45Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-06-30T20:57:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlperenAKKAYA/ggml-model-Q4_K_M | AlperenAKKAYA | 2024-06-30T20:48:41Z | 8 | 0 | null | [
"gguf",
"tr",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-30T16:35:38Z | ---
license: mit
language:
- tr
- en
--- |