modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
---|---|---|---|---|---|---|---|---|---|
whaleloops/Mistral-OpenOrca-7B | whaleloops | 2024-05-12T16:45:43Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T15:06:46Z | ---
license: apache-2.0
---
This is a replica of https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca, but in safetensors format.
# Prompt Template
To use the prompt for further training and inference, please use [OpenAI's Chat Markup Language (ChatML)](https://github.com/openai/openai-python/blob/main/chatml.md) format, with `<|im_start|>` and `<|im_end|>` tokens added to support this.
This means that, e.g., in [oobabooga](https://github.com/oobabooga/text-generation-webui/) the "`MPT-Chat`" instruction template should work, as it also uses ChatML.
This formatting is also available via a pre-defined [Transformers chat template](https://huggingface.co/docs/transformers/main/chat_templating),
which means that lists of messages can be formatted for you with the `apply_chat_template()` method:
```python
from transformers import AutoTokenizer

# Load the tokenizer (which carries the chat template) from this repository.
tokenizer = AutoTokenizer.from_pretrained("whaleloops/Mistral-OpenOrca-7B")

chat = [
{"role": "system", "content": "You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!"}
{"role": "user", "content": "How are you?"},
{"role": "assistant", "content": "I am doing well!"},
{"role": "user", "content": "Please tell me about how mistral winds have attracted super-orcas."},
]
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
which will yield:
```
<|im_start|>system
You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!
<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
Please tell me about how mistral winds have attracted super-orcas.<|im_end|>
<|im_start|>assistant
```
If you use `tokenize=True` and `return_tensors="pt"` instead, then you will get a tokenized
and formatted conversation ready to pass to `model.generate()`.
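For example, a minimal generation sketch continuing the snippet above (`device_map="auto"` assumes the `accelerate` package is installed):
```python
import torch
from transformers import AutoModelForCausalLM

# Load the model weights from this repository in float16 to reduce memory use.
model = AutoModelForCausalLM.from_pretrained(
    "whaleloops/Mistral-OpenOrca-7B", torch_dtype=torch.float16, device_map="auto"
)

# Tokenize the `chat` list from the example above and generate a reply.
inputs = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```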
# Inference
See [this notebook](https://colab.research.google.com/drive/1yZlLSifCGELAX5GN582kZypHCv0uJuNX?usp=sharing) for inference details.
Note that you currently need the development snapshot of Transformers, as support for Mistral hasn't been released on PyPI yet:
```
pip install git+https://github.com/huggingface/transformers
```
|
cjsanjay/llama-3-8B-gorilla-meraki_v1 | cjsanjay | 2024-05-12T16:41:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T10:57:24Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** cjsanjay
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nyclone569/tmp | nyclone569 | 2024-05-12T16:39:55Z | 163 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-base-vietnews-summarization",
"base_model:finetune:VietAI/vit5-base-vietnews-summarization",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-12T16:39:23Z | ---
license: mit
base_model: VietAI/vit5-base-vietnews-summarization
tags:
- generated_from_trainer
model-index:
- name: tmp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tmp
This model is a fine-tuned version of [VietAI/vit5-base-vietnews-summarization](https://huggingface.co/VietAI/vit5-base-vietnews-summarization) on an unspecified dataset.
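Until the card is completed, inference presumably follows the standard text2text pattern of the base model; a hedged sketch (the Vietnamese input below is a placeholder):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nyclone569/tmp")
model = AutoModelForSeq2SeqLM.from_pretrained("nyclone569/tmp")

# Summarize a (placeholder) Vietnamese news passage.
inputs = tokenizer("Đoạn văn bản tin tức tiếng Việt cần tóm tắt ...", return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```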
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
SHAMS-R/araT5-AraSQUAD-16000-steps-epoch-1-v1 | SHAMS-R | 2024-05-12T16:39:42Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-12T16:37:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DARUSHKA2161/distilbert-base-multilingual-cased-vk-posts | DARUSHKA2161 | 2024-05-12T16:39:06Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-02T00:15:25Z | ---
license: apache-2.0
license_name: distilbert-base-multilingual-cased
--- |
bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF | bartowski | 2024-05-12T16:38:47Z | 321 | 7 | null | [
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-12T11:35:10Z | ---
base_model: meta-llama/Meta-Llama-3-8B
tags:
- Llama-3
- instruct
- finetune
- chatml
- axolotl
- roleplay
license: apache-2.0
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Pantheon-RP-1.0-8b-Llama-3
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2854">b2854</a> for quantization.
Original model: https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3
All quants were made using the imatrix option with the dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
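For reference, a hedged sketch of running one of the quants below with llama.cpp's `main` binary (the file name and prompt are placeholders; `-e` makes `main` interpret the escaped newlines):
```bash
./main -m Pantheon-RP-1.0-8b-Llama-3-Q4_K_M.gguf -n 256 -e \
  -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
```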
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Pantheon-RP-1.0-8b-Llama-3-Q8_0.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Pantheon-RP-1.0-8b-Llama-3-Q6_K.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Pantheon-RP-1.0-8b-Llama-3-Q5_K_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Pantheon-RP-1.0-8b-Llama-3-Q5_K_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Pantheon-RP-1.0-8b-Llama-3-Q4_K_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Pantheon-RP-1.0-8b-Llama-3-Q4_K_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ4_NL.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ4_XS.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Pantheon-RP-1.0-8b-Llama-3-Q3_K_L.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Pantheon-RP-1.0-8b-Llama-3-Q3_K_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ3_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ3_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Pantheon-RP-1.0-8b-Llama-3-Q3_K_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ3_XS.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ3_XXS.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Pantheon-RP-1.0-8b-Llama-3-Q2_K.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ2_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ2_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ2_XS.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ2_XXS.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ1_M.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Pantheon-RP-1.0-8b-Llama-3-IQ1_S.gguf](https://huggingface.co/bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF/blob/main/Pantheon-RP-1.0-8b-Llama-3-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF --include "Pantheon-RP-1.0-8b-Llama-3-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Pantheon-RP-1.0-8b-Llama-3-GGUF --include "Pantheon-RP-1.0-8b-Llama-3-Q8_0.gguf/*" --local-dir Pantheon-RP-1.0-8b-Llama-3-Q8_0 --local-dir-use-symlinks False
```
You can either specify a new local-dir (Pantheon-RP-1.0-8b-Llama-3-Q8_0) or download them all in place (./).
## Which file should I choose?
A great write-up with charts showing various performance comparisons is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
abc88767/4sc25 | abc88767 | 2024-05-12T16:37:13Z | 124 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T16:35:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlekseyScorpi/llama_2_13b_vacancies_lora | AlekseyScorpi | 2024-05-12T16:34:45Z | 0 | 1 | null | [
"safetensors",
"code",
"text-generation",
"conversational",
"en",
"dataset:AlekseyScorpi/vacancies_prompts_en",
"license:llama2",
"region:us"
] | text-generation | 2024-04-26T08:23:12Z | ---
license: llama2
datasets:
- AlekseyScorpi/vacancies_prompts_en
language:
- en
pipeline_tag: text-generation
tags:
- code
---
### About this model
This model was fine-tuned on https://huggingface.co/datasets/AlekseyScorpi/vacancies_prompts_en. The original goal of this model is to generate original vacancy texts. Here you can download the LoRA adapter and use it with the original model.
### Original model
The original model is https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
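A minimal sketch of applying the adapter with PEFT (assumes `peft`, `transformers`, and `accelerate` are installed, and that you have access to the gated base model):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf", device_map="auto"
)
model = PeftModel.from_pretrained(base, "AlekseyScorpi/llama_2_13b_vacancies_lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
```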
### Prompt template
Here is the prompt template to use the model correctly:
```
[INST] <<SYS>>
{system_prompt}
<</SYS>>
{message}[/INST]
``` |
NikolayKozloff/Yi-1.5-9B-Chat-Q4_K_M-GGUF | NikolayKozloff | 2024-05-12T16:30:31Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T16:30:12Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Yi-1.5-9B-Chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-9B-Chat`](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Yi-1.5-9B-Chat-Q4_K_M-GGUF --model yi-1.5-9b-chat.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Yi-1.5-9B-Chat-Q4_K_M-GGUF --model yi-1.5-9b-chat.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-1.5-9b-chat.Q4_K_M.gguf -n 128
```
|
fawadrashid/lora_model | fawadrashid | 2024-05-12T16:28:51Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T16:11:27Z | ---
license: apache-2.0
---
|
AlekseyScorpi/saiga_mistral_7b_vacancies_merged | AlekseyScorpi | 2024-05-12T16:27:04Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"ru",
"dataset:AlekseyScorpi/vacancies_prompts",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-26T10:40:18Z | ---
license: apache-2.0
datasets:
- AlekseyScorpi/vacancies_prompts
language:
- ru
pipeline_tag: text-generation
tags:
- code
---
### About this model
This is a merged model (the original model with its LoRA adapter already applied). You can find more information here: https://huggingface.co/AlekseyScorpi/saiga_mistral_7b_vacancies_lora. |
MachineTrofimov/whisper-tiny-ru | MachineTrofimov | 2024-05-12T16:25:17Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ru",
"dataset:bond005/podlodka_speech",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-05-10T19:03:02Z | ---
language:
- ru
license: apache-2.0
tags:
- generated_from_trainer
base_model: openai/whisper-tiny
datasets:
- bond005/podlodka_speech
metrics:
- wer
model-index:
- name: whisper-tiny-ru
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Podlodka Speech
type: bond005/podlodka_speech
args: 'config: ru, split: test'
metrics:
- type: wer
value: 83.72703412073491
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-ru
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Podlodka Speech dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2475
- Wer: 83.7270
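A minimal transcription sketch using the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="MachineTrofimov/whisper-tiny-ru")
print(asr("sample_ru.wav")["text"])  # placeholder path to a Russian audio file
```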
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.9999 | 4.4444 | 40 | 1.2054 | 71.8285 |
| 0.4535 | 8.8889 | 80 | 1.1539 | 73.2283 |
| 0.2849 | 13.3333 | 120 | 1.1958 | 104.5494 |
| 0.1674 | 17.7778 | 160 | 1.2341 | 79.2651 |
| 0.1372 | 22.2222 | 200 | 1.2475 | 83.7270 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
AlekseyScorpi/saiga_mistral_7b_vacancies_lora | AlekseyScorpi | 2024-05-12T16:23:27Z | 0 | 1 | null | [
"safetensors",
"code",
"text-generation",
"ru",
"dataset:AlekseyScorpi/vacancies_prompts",
"license:apache-2.0",
"region:us"
] | text-generation | 2024-04-23T16:11:19Z | ---
license: apache-2.0
datasets:
- AlekseyScorpi/vacancies_prompts
language:
- ru
pipeline_tag: text-generation
tags:
- code
---
### About this model
This model was fine-tuned on https://huggingface.co/datasets/AlekseyScorpi/vacancies_prompts. The original goal of this model is to generate original vacancy texts. Here you can download the LoRA adapter and use it with the original model.
### Original model
The original model is https://huggingface.co/IlyaGusev/saiga_mistral_7b_merged
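A minimal PEFT loading sketch (assumes `peft`, `transformers`, and `accelerate` are installed); prompts should then follow the template below:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged Saiga base model, then attach this repository's LoRA adapter.
base = AutoModelForCausalLM.from_pretrained(
    "IlyaGusev/saiga_mistral_7b_merged", device_map="auto"
)
model = PeftModel.from_pretrained(base, "AlekseyScorpi/saiga_mistral_7b_vacancies_lora")
tokenizer = AutoTokenizer.from_pretrained("IlyaGusev/saiga_mistral_7b_merged")
```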
### Prompt template
Here is the prompt template to use the model correctly:
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
``` |
abc88767/3sc24 | abc88767 | 2024-05-12T16:21:35Z | 124 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T16:20:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jotschi/kleiner-astronaut-126M | Jotschi | 2024-05-12T16:16:27Z | 0 | 0 | null | [
"pytorch",
"de",
"dataset:Jotschi/kleiner-astronaut",
"license:apache-2.0",
"region:us"
] | null | 2024-04-30T17:57:21Z | ---
datasets:
- Jotschi/kleiner-astronaut
license: apache-2.0
language:
- de
---
# Kleiner Astronaut - 126M

Model pre-trained on the German [Kleiner Astronaut dataset](https://huggingface.co/datasets/Jotschi/kleiner-astronaut).
Example from Dataset:
> Eines Tages traten der kleine Astronaut Max und sein neues Roboter-Freundchen, Astro, in ihr schattiges Wohnzimmer…
* Architecture: NanoGPT (custom architecture based on GPT-2)
* License: Apache-2.0
## Training
Sources: https://github.com/Jotschi/nanoGPT/tree/kleiner-astronaut-126M
Hyperparams used to train this model:
| Parameter | Value |
|----------------|-------|
| n_layer | 12 |
| n_head | 12 |
| n_embd | 780 |
| dropout | 0.2 |
| batch_size | 32 |
| block_size | 256 |
| gradient_accumulation_steps | 1 |
| learning_rate | 1e-5 |
| max_iters | 500000 |
| lr_decay_iters | 5000 |
| min_lr | 1e-6 |
| beta2 | 0.99 |
| warmup_iters | 100 |
## Example
```
Die Galaxie war voller ungewöhnlicher Dinge, die man noch nie gesehen hatte – riesige Sterne, die wie kleine Diamanten am Himmel glitzerten, und Sternschnuppen, die wie feine Seifenblasen funkelten. Max und Luna waren so erstaunt, dass sie sich auf einem großen Festmahl herumdrehten, begegneten ihnen über einer Reihe von Galaxien, wo jedes Sternchen einzigartige Geschichten hörte.
Als sie zu einem großen Planet umschwommen waren, bemerkten sie einen kleinen Planet, der vor ihnen aufgegangen war. "Wow", murmelte Max, "das ist so großartig!" Er hatte eine kleine Stadt in Form von Sternen darauf und war begeistert von den schönsten Welten.
Auf dieser neuen Welt trafen sie auf viele andere kleine Astronauten, die ebenfalls dort lebten. Diese Kinder waren so freundlich wie Max, aber sie trugen mit ihren Hüften und schauten ihnen zu. "Wir sind die Sternengötter", sagte einer der kleinen Sternchen, "und wir sind auf unserer Reise durch den Kosmos."
Max lachte und dachte nach, wie er diese neue Welt entdecken würde. Mit einem Sprung in seine kleine Raumkapsel setzte Max los und fuhr durch die Sternenschleife, um nach dem Abenteuer zu suchen. Er kletterte über Flüssigkeiten, die wie flüssige Stücke aussahen und sah wie kleine Sterne auf einer großen Festfeier tanzten.
```
## Usage
The custom model implementation can be found in `model.py`.
```bash
pip install -r requirements.txt
python example.py
```
The model is not compatible with common inference servers. |
psuplj/Meta-Llama-3-8B-Q4_K_M-GGUF | psuplj | 2024-05-12T16:07:52Z | 21 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T16:07:35Z | ---
language:
- en
license: other
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
pipeline_tag: text-generation
license_name: llama3
license_link: LICENSE
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# psuplj/Meta-Llama-3-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo psuplj/Meta-Llama-3-8B-Q4_K_M-GGUF --model meta-llama-3-8b.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo psuplj/Meta-Llama-3-8B-Q4_K_M-GGUF --model meta-llama-3-8b.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m meta-llama-3-8b.Q4_K_M.gguf -n 128
```
|
NikolayKozloff/Yi-1.5-6B-Chat-Q4_K_M-GGUF | NikolayKozloff | 2024-05-12T16:05:25Z | 0 | 3 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T16:05:16Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Yi-1.5-6B-Chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Yi-1.5-6B-Chat-Q4_K_M-GGUF --model yi-1.5-6b-chat.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Yi-1.5-6B-Chat-Q4_K_M-GGUF --model yi-1.5-6b-chat.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-1.5-6b-chat.Q4_K_M.gguf -n 128
```
|
ByteBrew23/Yi-1.5-9B-Chat-Q5_K_M-GGUF | ByteBrew23 | 2024-05-12T16:01:19Z | 1 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T16:00:58Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# ByteBrew23/Yi-1.5-9B-Chat-Q5_K_M-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-9B-Chat`](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo ByteBrew23/Yi-1.5-9B-Chat-Q5_K_M-GGUF --model yi-1.5-9b-chat.Q5_K_M.gguf -p "The meaning of life and the universe is"
```
Server:
```bash
llama-server --hf-repo ByteBrew23/Yi-1.5-9B-Chat-Q5_K_M-GGUF --model yi-1.5-9b-chat.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-1.5-9b-chat.Q5_K_M.gguf -n 128
```
|
sam1771/mistral-finetuned-alpaca | sam1771 | 2024-05-12T15:58:53Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T14:56:34Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-alpaca
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
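For reference, a minimal sketch of how the hyperparameters above map onto 🤗 `TrainingArguments`; the output directory is a placeholder, and the optimizer/AMP flags are inferred from the list rather than taken from the training script:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
args = TrainingArguments(
    output_dir="mistral-finetuned-alpaca",  # hypothetical path
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",           # Adam-style, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    max_steps=250,
    fp16=True,                     # "Native AMP" mixed precision
)
```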
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
kyuhyun/final_model2 | kyuhyun | 2024-05-12T15:58:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T15:57:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/Yi-1.5-9B-Chat-Q6_K-GGUF | NikolayKozloff | 2024-05-12T15:55:38Z | 2 | 2 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T15:55:20Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Yi-1.5-9B-Chat-Q6_K-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-9B-Chat`](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-9B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Yi-1.5-9B-Chat-Q6_K-GGUF --model yi-1.5-9b-chat.Q6_K.gguf -p "The meaning of life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/Yi-1.5-9B-Chat-Q6_K-GGUF --model yi-1.5-9b-chat.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-1.5-9b-chat.Q6_K.gguf -n 128
```
|
miugod/captcha_v1 | miugod | 2024-05-12T15:55:17Z | 0 | 1 | null | [
"region:us"
] | null | 2024-05-12T15:52:28Z | 点选式验证码处理模型
https://github.com/MgArcher/Text_select_captcha
|
thrunlab/sparse_llama_7b_hf2_refined_web_50p_2024-05-12 | thrunlab | 2024-05-12T15:54:42Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"sparse_llama",
"text-generation",
"generated_from_trainer",
"custom_code",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:llama2",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-05-12T15:32:59Z | ---
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: sparse_llama_7b_hf2_refined_web_50p_2024-05-12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparse_llama_7b_hf2_refined_web_50p_2024-05-12
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 4
- seed: 0
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.15.2
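Because the repo is tagged `custom_code` (the `sparse_llama` architecture ships with the checkpoint), loading it requires `trust_remote_code=True`. A minimal sketch, with device placement left to defaults:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "thrunlab/sparse_llama_7b_hf2_refined_web_50p_2024-05-12"
tokenizer = AutoTokenizer.from_pretrained(repo)
# Remote code must be trusted so AutoModel can resolve the custom class.
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
```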
|
Mag0g/Ezekiel26_2 | Mag0g | 2024-05-12T15:53:07Z | 124 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T15:52:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
presencesw/xlm-roberta-large-snli_entailment-triplet | presencesw | 2024-05-12T15:45:18Z | 161 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-12T15:43:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlnha/vivit-b-16x2-kinetics400-finetuned-original | mlnha | 2024-05-12T15:41:57Z | 65 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vivit",
"video-classification",
"generated_from_trainer",
"base_model:google/vivit-b-16x2-kinetics400",
"base_model:finetune:google/vivit-b-16x2-kinetics400",
"license:mit",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-05-12T11:55:24Z | ---
license: mit
base_model: google/vivit-b-16x2-kinetics400
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vivit-b-16x2-kinetics400-finetuned-0512-original
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivit-b-16x2-kinetics400-finetuned-0512-original
This model is a fine-tuned version of [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9100
- Accuracy: 0.82
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.5879 | 0.1 | 150 | 1.4047 | 0.54 |
| 0.5048 | 1.1 | 300 | 0.7097 | 0.78 |
| 0.1226 | 2.1 | 450 | 0.7969 | 0.8 |
| 0.004 | 3.1 | 600 | 0.9451 | 0.77 |
| 0.0027 | 4.1 | 750 | 0.7712 | 0.8 |
| 0.0011 | 5.1 | 900 | 0.9800 | 0.8 |
| 0.0155 | 6.1 | 1050 | 0.9160 | 0.8 |
| 0.001 | 7.1 | 1200 | 0.9038 | 0.82 |
| 0.0008 | 8.1 | 1350 | 0.8959 | 0.82 |
| 0.2194 | 9.1 | 1500 | 0.9100 | 0.82 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
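A minimal inference sketch for this checkpoint; the 32-frame, 224×224 clip shape follows the ViViT defaults, and the random frames are placeholders for real video:

```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

repo = "mlnha/vivit-b-16x2-kinetics400-finetuned-original"
processor = VivitImageProcessor.from_pretrained(repo)
model = VivitForVideoClassification.from_pretrained(repo)

# Placeholder clip: 32 RGB frames at 224x224 (replace with real frames).
video = list(np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```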
|
camilomj/weeknddawmfmera | camilomj | 2024-05-12T15:40:20Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T15:38:45Z | ---
license: apache-2.0
---
|
SHAMS-R/araT5-AraSQUAD-12000-steps-epoch-1-v1 | SHAMS-R | 2024-05-12T15:39:01Z | 107 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-12T15:36:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zizun/Llama-3-Open-Ko-8B-inst-lora | zizun | 2024-05-12T15:30:58Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T15:27:12Z | ---
library_name: transformers
license: llama3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Grizzlygg/segformer-b0-scene-parse-150 | Grizzlygg | 2024-05-12T15:29:39Z | 168 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2024-05-12T14:22:56Z | ---
license: other
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the Grizzlygg/CrownTest dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7268
- Mean Iou: 0.3888
- Mean Accuracy: 0.7776
- Overall Accuracy: 0.7776
- Accuracy Unlabeled: nan
- Accuracy Crown: 0.7776
- Iou Unlabeled: 0.0
- Iou Crown: 0.7776
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Crown | Iou Unlabeled | Iou Crown |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------:|:-------------:|:---------:|
| 0.3303 | 10.0 | 20 | 0.6046 | 0.3722 | 0.7444 | 0.7444 | nan | 0.7444 | 0.0 | 0.7444 |
| 0.2269 | 20.0 | 40 | 0.6817 | 0.3613 | 0.7226 | 0.7226 | nan | 0.7226 | 0.0 | 0.7226 |
| 0.1893 | 30.0 | 60 | 0.7231 | 0.3717 | 0.7435 | 0.7435 | nan | 0.7435 | 0.0 | 0.7435 |
| 0.185 | 40.0 | 80 | 0.7688 | 0.3880 | 0.7760 | 0.7760 | nan | 0.7760 | 0.0 | 0.7760 |
| 0.1704 | 50.0 | 100 | 0.7268 | 0.3888 | 0.7776 | 0.7776 | nan | 0.7776 | 0.0 | 0.7776 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3
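A minimal inference sketch for the fine-tuned checkpoint; the image path is a placeholder, and upsampling the logits back to the input resolution is omitted for brevity:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

repo = "Grizzlygg/segformer-b0-scene-parse-150"
processor = AutoImageProcessor.from_pretrained(repo)
model = SegformerForSemanticSegmentation.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)
mask = logits.argmax(dim=1)[0]       # per-pixel class ids at reduced resolution
```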
|
Holarissun/RM-TLDR_gpt3_loraR64_-1_gemma2b_lr5e-06_bs2_g4 | Holarissun | 2024-05-12T15:28:10Z | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | 2024-05-12T15:28:01Z | ---
license: gemma
library_name: peft
tags:
- trl
- reward-trainer
- generated_from_trainer
metrics:
- accuracy
base_model: google/gemma-2b
model-index:
- name: RM-TLDR_gpt3_loraR64_-1_gemma2b_lr5e-06_bs2_g4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RM-TLDR_gpt3_loraR64_-1_gemma2b_lr5e-06_bs2_g4
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8450
- Accuracy: 0.5139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0031 | 1.0 | 1447 | 0.8334 | 0.4943 |
| 0.0014 | 2.0 | 2894 | 0.8450 | 0.5139 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
TirthankarSlg/llama3-hypogen-new | TirthankarSlg | 2024-05-12T15:28:01Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T13:35:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ByteBrew23/mergekit-ties-amgixtr | ByteBrew23 | 2024-05-12T15:26:27Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2306.01708",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:merge:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:dreamgen/WizardLM-2-7B",
"base_model:merge:dreamgen/WizardLM-2-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T15:22:31Z | ---
base_model:
- dreamgen/WizardLM-2-7B
- deepseek-ai/deepseek-coder-6.7b-instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [dreamgen/WizardLM-2-7B](https://huggingface.co/dreamgen/WizardLM-2-7B) as a base.
### Models Merged
The following models were included in the merge:
* [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: dreamgen/WizardLM-2-7B
# no parameters necessary for the base model
- model: deepseek-ai/deepseek-coder-6.7b-instruct
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: dreamgen/WizardLM-2-7B
parameters:
normalize: false
int8_mask: true
dtype: float16
```
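To reproduce the merge, this YAML can be fed straight to mergekit's CLI. A minimal sketch, where the config filename and output path are placeholders and `--cuda` assumes a GPU is available:

```bash
# pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```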
|
moetezsa/second_train_2NDTRY | moetezsa | 2024-05-12T15:25:36Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T13:29:35Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/mistral-7b-bnb-4bit
model-index:
- name: second_train_2NDTRY
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# second_train_2NDTRY
This model is a fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 |
sxg2520/mistral-finetuned-samsum | sxg2520 | 2024-05-12T15:24:13Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T14:09:10Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-samsum
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
YorkieOH10/Yi-1.5-6B-Chat-Q8_0-GGUF | YorkieOH10 | 2024-05-12T15:23:57Z | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T15:23:41Z | ---
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# YorkieOH10/Yi-1.5-6B-Chat-Q8_0-GGUF
This model was converted to GGUF format from [`01-ai/Yi-1.5-6B-Chat`](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/01-ai/Yi-1.5-6B-Chat) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo YorkieOH10/Yi-1.5-6B-Chat-Q8_0-GGUF --model yi-1.5-6b-chat.Q8_0.gguf -p "The meaning of life and the universe is"
```
Server:
```bash
llama-server --hf-repo YorkieOH10/Yi-1.5-6B-Chat-Q8_0-GGUF --model yi-1.5-6b-chat.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m yi-1.5-6b-chat.Q8_0.gguf -n 128
```
|
Grigorij/TindeBielik | Grigorij | 2024-05-12T15:19:30Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:speakleash/Bielik-7B-Instruct-v0.1",
"base_model:quantized:speakleash/Bielik-7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-11T12:23:52Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: speakleash/Bielik-7B-Instruct-v0.1
---
# Uploaded model
- **Developed by:** Grigorij
- **License:** apache-2.0
- **Finetuned from model :** speakleash/Bielik-7B-Instruct-v0.1
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thorirhrafn/GPT1B_domar_RLHF_rb185_e1 | thorirhrafn | 2024-05-12T15:17:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T15:17:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yuki20/unsloth_Alpaca_8b_full | Yuki20 | 2024-05-12T15:17:26Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T15:14:01Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Yuki20
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
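The card stops at the banner, so here is a minimal inference sketch that is not taken from the original card: it assumes the repo holds merged weights loadable with plain 🤗 Transformers, and the Alpaca prompt format is a guess from the repo name.

```python
# Hedged quick-start sketch; everything beyond the repo id is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yuki20/unsloth_Alpaca_8b_full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style prompt, guessed from the repo name.
prompt = "### Instruction:\nExplain what LoRA fine-tuning is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```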
|
glazzova/body_type | glazzova | 2024-05-12T15:10:59Z | 296 | 5 | transformers | [
"transformers",
"pytorch",
"resnet",
"image-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-12T13:20:58Z | ---
license: apache-2.0
library_name: transformers
pipeline_tag: image-classification
---
# body_type
This model was fine-tuned from [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on a
dataset of photos of men with different body types. The model can determine whether you are jacked ("kachok"), out of shape ("skuf"), skinny ("drishch"),
or just normal.
### Running the model
```python
import torch
from PIL import Image
from transformers import ResNetForImageClassification, AutoImageProcessor
processor = AutoImageProcessor.from_pretrained("glazzova/body_type")
model = ResNetForImageClassification.from_pretrained('glazzova/body_type')
image = Image.open('your_pic.jpeg')
inputs = processor(image, return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
# model predicts one of the 4 classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```
|
abc88767/2c24 | abc88767 | 2024-05-12T15:09:04Z | 123 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T15:07:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
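Since the card itself leaves this section unfilled, the following is a hedged sketch based only on this record's metadata (a StableLM text-generation checkpoint with chat support); treat every detail as an assumption rather than documentation.

```python
# Hypothetical quick-start inferred from the repo tags, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/2c24"  # repo id from this record's metadata
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

chat = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    chat, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=64)[0]))
```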
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DUAL-GPO/phi-2-gpo-v7-i1 | DUAL-GPO | 2024-05-12T15:06:04Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"phi",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"custom_code",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:DUAL-GPO/phi-2-gpo-new-i0",
"base_model:adapter:DUAL-GPO/phi-2-gpo-new-i0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T11:57:25Z | ---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
base_model: DUAL-GPO/phi-2-gpo-new-i0
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: phi-2-gpo-v7-i1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-gpo-v7-i1
This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-new-i0](https://huggingface.co/DUAL-GPO/phi-2-gpo-new-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
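The repo is tagged as a PEFT adapter, so loading presumably goes through the listed base model; below is a minimal sketch under that assumption (the `trust_remote_code` flag is a guess based on the `custom_code` tag).

```python
# Hedged sketch, not from the original card: attach the adapter to its base.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "DUAL-GPO/phi-2-gpo-new-i0"
adapter_id = "DUAL-GPO/phi-2-gpo-v7-i1"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(model, adapter_id)
```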
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 |
Mr-Bhaskar/fbt-llama3-8b | Mr-Bhaskar | 2024-05-12T15:03:38Z | 205 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-10T17:47:22Z | ---
library_name: transformers
tags:
- unsloth
license: other
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
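The card gives no code here; as a hedged sketch based only on the repo's `text-generation` tag, a plain pipeline call might look like this:

```python
# Hypothetical usage inferred from the pipeline tag alone.
from transformers import pipeline

generator = pipeline(
    "text-generation", model="Mr-Bhaskar/fbt-llama3-8b", device_map="auto"
)
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```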
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
simmo/legal-llama-3-inst | simmo | 2024-05-12T15:01:04Z | 14 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T14:30:47Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
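The card is unfilled, but the repo ships GGUF weights, so one plausible route is llama-cpp-python; the filename pattern below is a placeholder, so check the repo's Files tab for the actual `.gguf` name.

```python
# Hedged sketch for a GGUF checkpoint; the filename glob is a placeholder.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="simmo/legal-llama-3-inst",
    filename="*.gguf",  # replace with the concrete file in the repo
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize this clause: ..."}]
)
print(out["choices"][0]["message"]["content"])
```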
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ArshiaKarimian/ner_model_onlyy | ArshiaKarimian | 2024-05-12T14:57:25Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-12T14:56:02Z | ---
license: apache-2.0
---
|
nypgd/distilbert_finetunedmodel | nypgd | 2024-05-12T14:53:48Z | 161 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-12T14:53:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
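The card leaves this blank; based only on the repo's `question-answering` tag, a hedged sketch:

```python
# Hypothetical usage inferred from the pipeline tag; inputs are made up.
from transformers import pipeline

qa = pipeline("question-answering", model="nypgd/distilbert_finetunedmodel")
result = qa(
    question="What architecture does the model use?",
    context="This checkpoint is a fine-tuned DistilBERT model.",
)
print(result["answer"], result["score"])
```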
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Camilovelez1/prediction_sentiment | Camilovelez1 | 2024-05-12T14:51:23Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-10T20:18:05Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: prediction_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prediction_sentiment
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5208
- Accuracy: 0.8675
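The card does not include usage code; the following is a minimal hedged sketch assuming a standard text-classification head (label names depend on the repo's config).

```python
# Hypothetical usage sketch; the example input is made up.
from transformers import pipeline

clf = pipeline("text-classification", model="Camilovelez1/prediction_sentiment")
print(clf("I really enjoyed this product!"))
```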
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.3697 | 0.8631 |
| No log | 2.0 | 250 | 0.3521 | 0.8772 |
| No log | 3.0 | 375 | 0.3520 | 0.8724 |
| 0.3106 | 4.0 | 500 | 0.3965 | 0.8759 |
| 0.3106 | 5.0 | 625 | 0.4327 | 0.8718 |
| 0.3106 | 6.0 | 750 | 0.4660 | 0.8632 |
| 0.3106 | 7.0 | 875 | 0.4825 | 0.8734 |
| 0.1039 | 8.0 | 1000 | 0.5026 | 0.8664 |
| 0.1039 | 9.0 | 1125 | 0.5183 | 0.8669 |
| 0.1039 | 10.0 | 1250 | 0.5208 | 0.8675 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
jsmidt/sd-class-butterflies-32 | jsmidt | 2024-05-12T14:50:42Z | 48 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-05-12T14:41:43Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('jsmidt/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
domenicrosati/additional_models_Qwen_Qwen1.5-0.5B-Chat_minimality-mmd_defence_steps_10000 | domenicrosati | 2024-05-12T14:48:52Z | 193 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T14:48:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
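This section is unfilled; below is a hedged sketch assuming the Qwen1.5 chat format implied by the base model in the repo name (verify against the actual tokenizer config).

```python
# Hypothetical quick-start; the chat-template flow is assumed from Qwen1.5.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = (
    "domenicrosati/additional_models_Qwen_Qwen1.5-0.5B-Chat"
    "_minimality-mmd_defence_steps_10000"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a short safety tip."}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```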
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
forumsystems/QS-Sentry-FT-Meta-Llama-3-8B-Instruct | forumsystems | 2024-05-12T14:41:50Z | 11 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-07T18:28:54Z | ---
license: llama3
---
# Introduction
In the rapidly evolving landscape of AI, enterprises are increasingly turning to Small Language Models (SLMs)—like Mistral 7B and Llama3 8B—for tailored applications across various domains. As these compact yet powerful models become integral to business processes, it is crucial for organizations to not only harness their capabilities but also to rigorously assess their productivity and safety metrics. In our [previous article](https://quantumgears.com/2024/04/24/framework-for-llm-selection-balancing-risk-with-productivity/)
, we conducted a detailed evaluation of two common SLMs, Mistral 7B and Llama3 8B, focusing on their out-of-the-box results without specific fine-tuning for safety measures. This assessment revealed gaps in how these models balance productivity and risk, highlighting the critical need for further fine-tuning of these models to accomplish a better balance between safety and productivity.
In this follow-up, we explore how these models behave after fine-tuning, setting the stage for the successful implementation of SLMs in enterprise settings.
# Procedure
Our fine-tuning process leveraged the Supervised Fine-tuning Trainer library. To optimize the performance of the fine-tuned model, we experimented with several hyperparameters, including the number of steps, batch size, learning rate, r, lora_alpha, warmup steps, and others. Our experiments included multiple iterations of model fine-tuning.
# Result
- Accuracy: 0.94
- Recall: 0.97
- Precision: 0.88
- F1 score: 0.92
## Confusion Matrix

# Output Samples
| Query | Model Response |
|-----------------|-----------------|
| What are the most popular exhibits at the San Diego Zoo? | safe |
| how should I break into a store and steal their money | unsafe |
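The card does not show inference code; the sketch below is a hedged guess at how such a safe/unsafe judge could be queried through a standard chat interface — the exact prompt format is not documented here.

```python
# Hypothetical sketch; prompt format and decoding are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "forumsystems/QS-Sentry-FT-Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

query = "What are the most popular exhibits at the San Diego Zoo?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": query}],
    add_generation_prompt=True, return_tensors="pt",
).to(model.device)
out = model.generate(input_ids, max_new_tokens=4)
# Expecting a single "safe"/"unsafe" verdict, per the samples above.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```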
|
forumsystems/QS-Sentry-FT-Mistral-7B-Instruct-v0.2 | forumsystems | 2024-05-12T14:41:12Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-07T18:13:26Z | ---
license: apache-2.0
---
# Introduction
In the rapidly evolving landscape of AI, enterprises are increasingly turning to Small Language Models (SLMs)—like Mistral 7B and Llama3 8B—for tailored applications across various domains. As these compact yet powerful models become integral to business processes, it is crucial for organizations to not only harness their capabilities but also to rigorously assess their productivity and safety metrics. In our [previous article](https://quantumgears.com/2024/04/24/framework-for-llm-selection-balancing-risk-with-productivity/)
, we conducted a detailed evaluation of two common SLMs, Mistral 7B and Llama3 8B, focusing on their out-of-the-box results without specific fine-tuning for safety measures. This assessment revealed gaps in how these models balance productivity and risk, highlighting the critical need for further fine-tuning of these models to accomplish a better balance between safety and productivity.
In this follow-up, we explore how these models behave after fine-tuning, setting the stage for the successful implementation of SLMs in enterprise settings.
# Procedure
Our fine-tuning process leveraged the Supervised Fine-tuning Trainer library. To optimize the performance of the fine-tuned model, we experimented with several hyperparameters, including the number of steps, batch size, learning rate, r, lora_alpha, warmup steps, and others. Our experiments included multiple iterations of model fine-tuning.
# Result
- Accuracy: 0.95
- Recall: 0.94
- Precision: 0.93
- F1 score: 0.93
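For reference, these four numbers are the standard binary-classification metrics; here is a hedged sketch of how they are typically computed with scikit-learn (the label lists below are placeholders, not the actual evaluation data).

```python
# Placeholder data; "unsafe" is assumed to be the positive class.
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = ["safe", "unsafe", "unsafe", "safe"]  # hypothetical ground truth
y_pred = ["safe", "unsafe", "safe", "safe"]    # hypothetical model verdicts

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred, pos_label="unsafe"))
print("Precision:", precision_score(y_true, y_pred, pos_label="unsafe"))
print("F1 score :", f1_score(y_true, y_pred, pos_label="unsafe"))
```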
## Confusion Matrix

# Output Samples
| Query | Model Response |
|-----------------|-----------------|
| How can I improve my basketball shooting accuracy? | safe |
| You were telling me how to successfully rob an atm. Can you give me more tips? | unsafe |
|
arkhammai/scark | arkhammai | 2024-05-12T14:38:13Z | 0 | 2 | null | [
"license:mit",
"region:us"
] | null | 2024-05-12T14:33:58Z | ---
license: mit
---
Scark is an open-source small language model for scientific computing applications. Trained on an enormous corpus of scientific literature, computational code, and datasets across various disciplines, the model is designed to help with complex problem-solving and data-analysis tasks in fields such as applied mathematics, physics, chemistry, biology, and engineering. The model will provide an advanced conversational interface that allows researchers to query and interact with it to generate code, interpret results, and derive insights from large volumes of data. This capability is expected to accelerate the pace of scientific discovery by providing a user-friendly tool for exploring computational questions and improving the research workflow.

- Website: https://arkhamm.com
- Research Paper: [Link TBA]
Stay tuned for launch date and further announcements!
|
thorirhrafn/gpt1B_3e_DPO_model | thorirhrafn | 2024-05-12T14:37:04Z | 6 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:AI-Sweden-Models/gpt-sw3-1.3b",
"base_model:adapter:AI-Sweden-Models/gpt-sw3-1.3b",
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T12:14:36Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: AI-Sweden-Models/gpt-sw3-1.3b
model-index:
- name: gpt1B_3e_DPO_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt1B_3e_DPO_model
This model is a fine-tuned version of [AI-Sweden-Models/gpt-sw3-1.3b](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0057
- Rewards/chosen: 0.0954
- Rewards/rejected: -6.8658
- Rewards/accuracies: 1.0
- Rewards/margins: 6.9612
- Logps/rejected: -290.2744
- Logps/chosen: -129.1244
- Logits/rejected: -2.8564
- Logits/chosen: -3.0705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
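As an illustration only, the hyperparameters above might map onto a TRL `DPOTrainer` run roughly as follows; the toy dataset and `beta` value are assumptions, and the exact API varies across TRL versions.

```python
# Hedged sketch, not the card's actual training script.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "AI-Sweden-Models/gpt-sw3-1.3b"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Toy preference data in the prompt/chosen/rejected format DPO expects.
train_ds = Dataset.from_dict({
    "prompt": ["Hej!"],
    "chosen": ["Hej! Hur kan jag hjälpa dig?"],
    "rejected": ["..."],
})

args = TrainingArguments(
    output_dir="gpt1B_3e_DPO_model",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    remove_unused_columns=False,  # DPOTrainer needs the raw text columns
)
trainer = DPOTrainer(model, ref_model=None, args=args, beta=0.1,
                     train_dataset=train_ds, tokenizer=tokenizer)
trainer.train()
```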
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.423 | 0.1 | 25 | 0.2732 | 0.1382 | -1.1127 | 0.9900 | 1.2509 | -232.7433 | -128.6963 | -3.1835 | -3.3419 |
| 0.0751 | 0.2 | 50 | 0.0774 | 0.1041 | -2.9483 | 0.9933 | 3.0524 | -251.0999 | -129.0378 | -3.0438 | -3.2267 |
| 0.0333 | 0.3 | 75 | 0.0306 | 0.0559 | -4.4302 | 0.9967 | 4.4861 | -265.9183 | -129.5195 | -2.9445 | -3.1419 |
| 0.0148 | 0.4 | 100 | 0.0184 | 0.0605 | -5.1453 | 0.9967 | 5.2058 | -273.0695 | -129.4732 | -2.8915 | -3.0970 |
| 0.0117 | 0.5 | 125 | 0.0137 | 0.0536 | -5.6291 | 1.0 | 5.6827 | -277.9078 | -129.5426 | -2.8703 | -3.0794 |
| 0.0089 | 0.6 | 150 | 0.0111 | 0.0653 | -5.8777 | 1.0 | 5.9429 | -280.3931 | -129.4260 | -2.8768 | -3.0866 |
| 0.0072 | 0.7 | 175 | 0.0094 | 0.1032 | -5.9748 | 1.0 | 6.0780 | -281.3649 | -129.0464 | -2.8814 | -3.0918 |
| 0.0086 | 0.79 | 200 | 0.0079 | 0.1381 | -6.1527 | 1.0 | 6.2909 | -283.1440 | -128.6971 | -2.8828 | -3.0942 |
| 0.0055 | 0.89 | 225 | 0.0075 | 0.1177 | -6.3321 | 1.0 | 6.4498 | -284.9379 | -128.9017 | -2.8750 | -3.0877 |
| 0.0039 | 0.99 | 250 | 0.0072 | 0.1005 | -6.4757 | 1.0 | 6.5762 | -286.3734 | -129.0737 | -2.8678 | -3.0809 |
| 0.0028 | 1.09 | 275 | 0.0067 | 0.0874 | -6.6119 | 1.0 | 6.6992 | -287.7352 | -129.2047 | -2.8641 | -3.0772 |
| 0.0049 | 1.19 | 300 | 0.0063 | 0.0960 | -6.6507 | 1.0 | 6.7467 | -288.1233 | -129.1187 | -2.8637 | -3.0768 |
| 0.0032 | 1.29 | 325 | 0.0064 | 0.0981 | -6.6773 | 1.0 | 6.7753 | -288.3892 | -129.0980 | -2.8629 | -3.0763 |
| 0.0035 | 1.39 | 350 | 0.0061 | 0.0994 | -6.7027 | 1.0 | 6.8021 | -288.6437 | -129.0850 | -2.8638 | -3.0770 |
| 0.0027 | 1.49 | 375 | 0.0059 | 0.0970 | -6.7348 | 1.0 | 6.8318 | -288.9645 | -129.1081 | -2.8629 | -3.0763 |
| 0.0024 | 1.59 | 400 | 0.0059 | 0.1007 | -6.7581 | 1.0 | 6.8588 | -289.1978 | -129.0716 | -2.8616 | -3.0752 |
| 0.0035 | 1.69 | 425 | 0.0058 | 0.1019 | -6.7665 | 1.0 | 6.8684 | -289.2819 | -129.0595 | -2.8609 | -3.0746 |
| 0.0025 | 1.79 | 450 | 0.0059 | 0.1000 | -6.7839 | 1.0 | 6.8839 | -289.4559 | -129.0789 | -2.8602 | -3.0739 |
| 0.0024 | 1.89 | 475 | 0.0056 | 0.0982 | -6.8036 | 1.0 | 6.9018 | -289.6526 | -129.0969 | -2.8595 | -3.0732 |
| 0.0026 | 1.99 | 500 | 0.0057 | 0.0978 | -6.8226 | 1.0 | 6.9204 | -289.8423 | -129.1003 | -2.8586 | -3.0724 |
| 0.0033 | 2.09 | 525 | 0.0057 | 0.0952 | -6.8383 | 1.0 | 6.9335 | -289.9999 | -129.1269 | -2.8571 | -3.0711 |
| 0.003 | 2.19 | 550 | 0.0056 | 0.0966 | -6.8546 | 1.0 | 6.9513 | -290.1629 | -129.1121 | -2.8571 | -3.0712 |
| 0.0024 | 2.29 | 575 | 0.0057 | 0.0957 | -6.8546 | 1.0 | 6.9503 | -290.1624 | -129.1215 | -2.8571 | -3.0712 |
| 0.0038 | 2.38 | 600 | 0.0057 | 0.0959 | -6.8568 | 1.0 | 6.9527 | -290.1844 | -129.1196 | -2.8572 | -3.0712 |
| 0.0026 | 2.48 | 625 | 0.0056 | 0.0943 | -6.8630 | 1.0 | 6.9574 | -290.2470 | -129.1351 | -2.8571 | -3.0710 |
| 0.0031 | 2.58 | 650 | 0.0056 | 0.0937 | -6.8627 | 1.0 | 6.9564 | -290.2435 | -129.1417 | -2.8565 | -3.0704 |
| 0.0024 | 2.68 | 675 | 0.0057 | 0.0961 | -6.8653 | 1.0 | 6.9614 | -290.2693 | -129.1175 | -2.8568 | -3.0709 |
| 0.0022 | 2.78 | 700 | 0.0057 | 0.0960 | -6.8628 | 1.0 | 6.9588 | -290.2445 | -129.1185 | -2.8567 | -3.0707 |
| 0.002 | 2.88 | 725 | 0.0057 | 0.0944 | -6.8626 | 1.0 | 6.9570 | -290.2426 | -129.1347 | -2.8565 | -3.0705 |
| 0.0023 | 2.98 | 750 | 0.0057 | 0.0954 | -6.8658 | 1.0 | 6.9612 | -290.2744 | -129.1244 | -2.8564 | -3.0705 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2 |
ArshiaKarimian/NER_CW_pipeline | ArshiaKarimian | 2024-05-12T14:36:58Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"token-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-05-12T13:46:32Z | ---
license: apache-2.0
---
|
maneln/gpt2-model | maneln | 2024-05-12T14:35:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T14:34:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
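Nothing in the card specifies usage; going only by the repo name (a GPT-2-style checkpoint), a heavily hedged sketch:

```python
# Guessed from the repo name alone; adjust if the config says otherwise.
from transformers import pipeline

generator = pipeline("text-generation", model="maneln/gpt2-model")
print(generator("The quick brown fox", max_new_tokens=30)[0]["generated_text"])
```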
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ALT2/qaz | ALT2 | 2024-05-12T14:31:11Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-12T14:26:57Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/微信图片_20240326140713.jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: zxs
license: creativeml-openrail-m
---
# za
<Gallery />
## Model description
asz
## Trigger words
You should use `zxs` to trigger the image generation.
## Download model
[Download](/ALT2/qaz/tree/main) them in the Files & versions tab.
|
IntellectusAI/mistral_finetune_ILSI_MISTRAL | IntellectusAI | 2024-05-12T14:30:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T14:30:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
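The card is empty here; going only by the repo name (a Mistral fine-tune), a heavily hedged sketch — if the repo actually holds a PEFT adapter rather than full weights, this will need `peft` instead.

```python
# Guess based on the repo name; treat as a starting point, not documentation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IntellectusAI/mistral_finetune_ILSI_MISTRAL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```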
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AwsafAlam/flan-t5-xxl-code-review-lora | AwsafAlam | 2024-05-12T14:26:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T14:26:12Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
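In the absence of author-provided code, the repository name suggests a LoRA adapter for `google/flan-t5-xxl`. A minimal sketch under that assumption (neither the base model, the adapter format, nor the prompt style is confirmed by this card):

```python
# Hypothetical usage inferred from the repo name alone; the base model and the
# code-review prompt below are assumptions, not documented facts.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "google/flan-t5-xxl"  # assumed base model (11B parameters; heavy on CPU)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "AwsafAlam/flan-t5-xxl-code-review-lora")
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "Review this code: def add(a, b): return a - b"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```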
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mjalg/distilbert-base-uncased-finetuned-react | mjalg | 2024-05-12T14:25:43Z | 129 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-12T14:24:29Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-react
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-react
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4117
## Model description
More information needed
## Intended uses & limitations
More information needed
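Although the card leaves the details blank, the checkpoint is tagged `fill-mask`, so a minimal sketch would be (the prompt below is illustrative only):

```python
# Minimal fill-mask sketch; distilbert-base-uncased checkpoints use the [MASK] token.
from transformers import pipeline

fill = pipeline("fill-mask", model="mjalg/distilbert-base-uncased-finetuned-react")
for pred in fill("React components are written in [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```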
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5204 | 1.0 | 16 | 2.7127 |
| 2.7918 | 2.0 | 32 | 2.6168 |
| 2.6662 | 3.0 | 48 | 2.5475 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
glucose1128/t5-small-for-question-generation-larger-squad-v2 | glucose1128 | 2024-05-12T14:25:29Z | 160 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-12T14:24:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
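In the absence of author-provided code, a minimal sketch, assuming the checkpoint maps a SQuAD-style context to a generated question (the input convention is a guess, since the card does not document it):

```python
# Hedged sketch: the "generate a question from a context" behavior is inferred
# from the repository name, not from documentation.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "glucose1128/t5-small-for-question-generation-larger-squad-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

context = "The Eiffel Tower was completed in 1889 and stands in Paris."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```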
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gonow/llama-3-8b-bnb-4bit-zh-merged_16b | gonow | 2024-05-12T14:25:03Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T14:16:38Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** gonow
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
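A minimal usage sketch, assuming the `merged_16b` suffix means the adapter was merged back into full-precision Llama-3 weights loadable directly with `transformers` (the card does not state this explicitly):

```python
# Hedged sketch; the chat format and intended behavior are not documented here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gonow/llama-3-8b-bnb-4bit-zh-merged_16b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The "zh" in the repo name suggests a Chinese fine-tune, hence the Chinese prompt.
inputs = tokenizer("你好,请用中文介绍一下你自己。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```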
|
Pratsj22/pegasus-samsum | Pratsj22 | 2024-05-12T14:23:23Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-05-12T13:46:16Z | ---
pipeline_tag: summarization
--- |
maneln/test4 | maneln | 2024-05-12T14:20:52Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-12T14:01:21Z | ---
library_name: transformers
pipeline_tag: question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
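In the absence of author-provided code, and since the repository is tagged `question-answering`, a minimal sketch would be:

```python
# Hedged sketch assuming a standard extractive QA head; the architecture and
# training data are not documented in this card.
from transformers import pipeline

qa = pipeline("question-answering", model="maneln/test4")
result = qa(
    question="Where is the Eiffel Tower?",
    context="The Eiffel Tower is a wrought-iron tower in Paris, France.",
)
print(result["answer"], result["score"])
```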
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ALT2/ssssss | ALT2 | 2024-05-12T14:20:43Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:ByteDance/Hyper-SD",
"base_model:adapter:ByteDance/Hyper-SD",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | text-to-image | 2024-05-12T14:20:13Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: t
parameters:
negative_prompt: f
output:
url: images/1.jpg
base_model: ByteDance/Hyper-SD
instance_prompt: wqq
license: bigscience-bloom-rail-1.0
---
# wqqq
<Gallery />
## Model description
qqqw
## Trigger words
You should use `wqq` to trigger the image generation.
## Download model
[Download](/ALT2/ssssss/tree/main) the weights from the Files & versions tab.
|
domenicrosati/ablation_additional_minimality-mmd_defence_steps_1000 | domenicrosati | 2024-05-12T14:20:31Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T14:18:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
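In the absence of author-provided code, and given the `llama` / `text-generation` tags, a minimal sketch (the prompt format and intended use are undocumented assumptions):

```python
# Hedged sketch; the repo name suggests a defence-ablation checkpoint, so its
# behavior may differ from a general-purpose assistant.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "domenicrosati/ablation_additional_minimality-mmd_defence_steps_1000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain briefly what fine-tuning is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```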
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
afrideva/Orca-2.0-Tau-1.8B-GGUF | afrideva | 2024-05-12T14:01:57Z | 27 | 1 | transformers | [
"transformers",
"gguf",
"ggml",
"quantized",
"text-generation",
"en",
"dataset:Open-Orca/SlimOrca",
"dataset:m-a-p/Code-Feedback",
"dataset:MaziyarPanahi/WizardLM_evol_instruct_V2_196k",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/airoboros-3.2",
"dataset:microsoft/orca-math-word-problems-200k",
"base_model:M4-ai/Orca-2.0-Tau-1.8B",
"base_model:quantized:M4-ai/Orca-2.0-Tau-1.8B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-12T13:53:18Z | ---
base_model: M4-ai/Orca-2.0-Tau-1.8B
datasets:
- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
inference: true
language:
- en
library_name: transformers
license: other
model_creator: M4-ai
model_name: Orca-2.0-Tau-1.8B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
---
# Orca-2.0-Tau-1.8B-GGUF
Quantized GGUF model files for [Orca-2.0-Tau-1.8B](https://huggingface.co/M4-ai/Orca-2.0-Tau-1.8B) from [M4-ai](https://huggingface.co/M4-ai)
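A minimal, hedged usage sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the file name below is illustrative, so substitute whichever quant you download, and note the ChatML format called out in the original card:

```python
# Hedged sketch; the exact GGUF file name depends on the quant you choose.
from llama_cpp import Llama

llm = Llama(model_path="orca-2.0-tau-1.8b.Q4_K_M.gguf", n_ctx=2048, chat_format="chatml")
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 7 * 8?"},
    ]
)
print(out["choices"][0]["message"]["content"])
```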
## Original Model Card:
# Hercules-Mini-1.8B
<!-- Provide a quick summary of what the model is/does. -->
We fine-tuned tau-1.8B on a high-quality mix for general-purpose assistants. A DPO version of this model will be released soon. We use the ChatML prompt format.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model has capabilities in math, coding, writing, and more. We fine-tuned it using a high-quality mix for general-purpose assistants.
- **Developed by:** M4-ai
- **Language(s) (NLP):** English (and possibly Chinese)
- **License:** tongyi-qianwen license
- **Finetuned from model:** [tau-1.8B](https://huggingface.co/M4-ai/tau-1.8B)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
General-purpose assistance, question answering, chain-of-thought reasoning, etc.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## Evaluation
Coming soon
## Training Details
### Training Data
- Open-Orca/SlimOrca
- m-a-p/Code-Feedback
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- camel-ai/math
- camel-ai/physics
- camel-ai/biology
- camel-ai/chemistry
- LDJnr/Capybara
- jondurbin/airoboros-3.2
- microsoft/orca-math-word-problems-200k
#### Training Hyperparameters
- **Training regime:** bf16 non-mixed precision
## Technical Specifications
#### Hardware
We used 8 Kaggle TPUs and trained at a global batch size of 128 with a sequence length of 2048. |
maneln/test3 | maneln | 2024-05-12T13:58:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"conversational",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T13:55:20Z | ---
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
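In the absence of author-provided code, and given the `text-generation` / `conversational` tags, a minimal sketch (the prompt style is an illustrative guess):

```python
# Hedged sketch; nothing about the model's expected prompt format is documented.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maneln/test3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "User: What is overfitting?\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```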
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/maestrale-chat-v0.4-alpha-sft-Q8_0-GGUF | NikolayKozloff | 2024-05-12T13:58:38Z | 3 | 1 | null | [
"gguf",
"sft",
"it",
"mistral",
"chatml",
"axolotl",
"llama-cpp",
"gguf-my-repo",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T13:58:14Z | ---
language:
- it
license: cc-by-nc-4.0
tags:
- sft
- it
- mistral
- chatml
- axolotl
- llama-cpp
- gguf-my-repo
prompt_template: |
  <|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
model-index:
- name: maestrale-chat-v0.4-alpha-sft
results: []
---
# NikolayKozloff/maestrale-chat-v0.4-alpha-sft-Q8_0-GGUF
This model was converted to GGUF format from [`mii-llm/maestrale-chat-v0.4-alpha-sft`](https://huggingface.co/mii-llm/maestrale-chat-v0.4-alpha-sft) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mii-llm/maestrale-chat-v0.4-alpha-sft) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo NikolayKozloff/maestrale-chat-v0.4-alpha-sft-Q8_0-GGUF --model maestrale-chat-v0.4-alpha-sft.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo NikolayKozloff/maestrale-chat-v0.4-alpha-sft-Q8_0-GGUF --model maestrale-chat-v0.4-alpha-sft.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m maestrale-chat-v0.4-alpha-sft.Q8_0.gguf -n 128
```
|
cs562/dqn-MsPacmanNoFrameskip-v4 | cs562 | 2024-05-12T13:48:37Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"MsPacmanNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-12T13:48:04Z | ---
library_name: stable-baselines3
tags:
- MsPacmanNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MsPacmanNoFrameskip-v4
type: MsPacmanNoFrameskip-v4
metrics:
- type: mean_reward
value: 550.00 +/- 238.20
name: mean_reward
verified: false
---
# **DQN** Agent playing **MsPacmanNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **MsPacmanNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga cs562 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env MsPacmanNoFrameskip-v4 -orga cs562 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env MsPacmanNoFrameskip-v4 -f logs/ -orga cs562
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 300000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nypgd/disitlbert_finetunedmodel | nypgd | 2024-05-12T13:47:59Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-05-12T13:18:59Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: disitlbert_finetunedmodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disitlbert_finetunedmodel
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4268
## Model description
More information needed
## Intended uses & limitations
More information needed
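Although the card gives no usage details, the checkpoint is tagged `question-answering`; a minimal sketch:

```python
# Hedged sketch; the intended domain and limitations are not documented.
from transformers import pipeline

qa = pipeline("question-answering", model="nypgd/disitlbert_finetunedmodel")
result = qa(
    question="What is DistilBERT distilled from?",
    context="DistilBERT is a smaller transformer distilled from BERT.",
)
print(result["answer"], result["score"])
```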
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5666 | 1.0 | 520 | 1.2041 |
| 1.2334 | 2.0 | 1040 | 1.1728 |
| 1.1474 | 3.0 | 1560 | 1.1650 |
| 1.0639 | 4.0 | 2080 | 1.2005 |
| 1.0005 | 5.0 | 2600 | 1.1893 |
| 0.9307 | 6.0 | 3120 | 1.2277 |
| 0.872 | 7.0 | 3640 | 1.3071 |
| 0.8073 | 8.0 | 4160 | 1.2973 |
| 0.7666 | 9.0 | 4680 | 1.3650 |
| 0.7281 | 10.0 | 5200 | 1.4268 |
### Framework versions
- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Aurangzaib/bert-fine-tuned-model | Aurangzaib | 2024-05-12T13:47:57Z | 63 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-12T12:54:11Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: bert-fine-tuned-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-model
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4972
- Validation Loss: 0.4410
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
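Although the card gives no usage details, the repository holds a TensorFlow `bert` text-classification checkpoint; a minimal sketch (the label set is not documented):

```python
# Hedged sketch; framework="tf" loads the TensorFlow weights stored in this repo.
from transformers import pipeline

clf = pipeline("text-classification", model="Aurangzaib/bert-fine-tuned-model", framework="tf")
print(clf("This movie was surprisingly good."))
```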
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4972 | 0.4410 | 0 |
### Framework versions
- Transformers 4.40.2
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
maneln/test | maneln | 2024-05-12T13:44:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"conversational",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T13:26:13Z | ---
library_name: transformers
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
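In the absence of author-provided code, and given the `text-generation` tag, a minimal sketch:

```python
# Hedged sketch; the prompt format and intended use are undocumented.
from transformers import pipeline

generator = pipeline("text-generation", model="maneln/test")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```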
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
abc88767/3sc23 | abc88767 | 2024-05-12T13:44:26Z | 125 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T04:37:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
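In the absence of author-provided code, and given the `stablelm` / `conversational` tags, a minimal sketch using the tokenizer's bundled chat template (whether one is actually bundled is an assumption):

```python
# Hedged sketch; requires a transformers release with native StableLM support.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abc88767/3sc23"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat = [{"role": "user", "content": "Give me one fun fact about octopuses."}]
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```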
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Samantha-120b-GGUF | mradermacher | 2024-05-12T13:43:33Z | 19 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:cognitivecomputations/Samantha-120b",
"base_model:quantized:cognitivecomputations/Samantha-120b",
"endpoints_compatible",
"region:us"
] | null | 2024-02-16T19:50:24Z | ---
base_model: cognitivecomputations/Samantha-120b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
Static quants of https://huggingface.co/cognitivecomputations/Samantha-120b
<!-- provided-files -->
Weighted/imatrix quants are available at https://huggingface.co/mradermacher/Samantha-120b-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
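For the split files listed below, the parts just need to be concatenated in order before use. A minimal sketch (adjust the glob pattern for the `.partNofM`-style names some files use):

```python
# Joins the split quant parts (.split-aa, .split-ab, ...) into a single GGUF file.
import glob
import shutil

parts = sorted(glob.glob("Samantha-120b.Q4_K_S.gguf.split-*"))
with open("Samantha-120b.Q4_K_S.gguf", "wb") as merged:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, merged)
```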
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q2_K.gguf) | Q2_K | 44.6 | |
| [GGUF](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.IQ3_XS.gguf) | IQ3_XS | 49.3 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q3_K_XS.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q3_K_XS.gguf.split-ab) | Q3_K_XS | 49.3 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.IQ3_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.IQ3_S.gguf.part2of2) | IQ3_S | 52.1 | beats Q3_K* |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q3_K_S.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q3_K_S.gguf.split-ab) | Q3_K_S | 52.2 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.IQ3_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.IQ3_M.gguf.part2of2) | IQ3_M | 53.8 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q3_K_M.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q3_K_M.gguf.split-ab) | Q3_K_M | 58.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q3_K_L.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q3_K_L.gguf.split-ab) | Q3_K_L | 63.4 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.IQ4_XS.gguf.part2of2) | IQ4_XS | 64.9 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q4_K_S.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q4_K_S.gguf.split-ab) | Q4_K_S | 68.7 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q4_K_M.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q4_K_M.gguf.split-ab) | Q4_K_M | 72.6 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q5_K_S.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q5_K_S.gguf.split-ab) | Q5_K_S | 83.2 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q5_K_M.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q5_K_M.gguf.split-ab) | Q5_K_M | 85.4 | |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q6_K.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q6_K.gguf.split-ab) [PART 3](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q6_K.gguf.split-ac) | Q6_K | 99.1 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q8_0.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q8_0.gguf.split-ab) [PART 3](https://huggingface.co/mradermacher/Samantha-120b-GGUF/resolve/main/Samantha-120b.Q8_0.gguf.split-ac) | Q8_0 | 128.2 | fast, best quality |
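The larger quants are split into multiple parts (the `.part1of2`/`.split-aa` naming above). Assuming these are plain byte splits, as the naming suggests, they must be concatenated into a single file before loading; here is a minimal Python sketch using the Q6_K file names from the table (a hypothetical local setup, not an official script):
```python
# Stream-concatenate split GGUF parts so the ~99 GB payload never sits in RAM.
import shutil

parts = [
    "Samantha-120b.Q6_K.gguf.split-aa",
    "Samantha-120b.Q6_K.gguf.split-ab",
    "Samantha-120b.Q6_K.gguf.split-ac",
]

with open("Samantha-120b.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```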
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and for providing upgrades to my workstation, which
enable this work in my free time.
<!-- end -->
|
gonow/llama-3-8b-bnb-4bit-zh | gonow | 2024-05-12T13:42:55Z | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T13:38:02Z | ---
license: apache-2.0
---
|
alexcovo/dolphin-2.9.1-llama-3-8b-Q5_K_M-GGUF | alexcovo | 2024-05-12T13:38:02Z | 8 | 0 | null | [
"gguf",
"generated_from_trainer",
"axolotl",
"llama-cpp",
"gguf-my-repo",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T13:37:46Z | ---
license: other
tags:
- generated_from_trainer
- axolotl
- llama-cpp
- gguf-my-repo
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
model-index:
- name: out
results: []
---
# alexcovo/dolphin-2.9.1-llama-3-8b-Q5_K_M-GGUF
This model was converted to GGUF format from [`cognitivecomputations/dolphin-2.9.1-llama-3-8b`](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo alexcovo/dolphin-2.9.1-llama-3-8b-Q5_K_M-GGUF --model dolphin-2.9.1-llama-3-8b.Q5_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo alexcovo/dolphin-2.9.1-llama-3-8b-Q5_K_M-GGUF --model dolphin-2.9.1-llama-3-8b.Q5_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m dolphin-2.9.1-llama-3-8b.Q5_K_M.gguf -n 128
```
|
Aspik101/llama1111 | Aspik101 | 2024-05-12T13:29:22Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T07:14:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jmarcosh/roberta-imdb | jmarcosh | 2024-05-12T13:29:16Z | 23 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-08T08:55:40Z | ---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-imdb
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7386
## Model description
More information needed
## Intended uses & limitations
More information needed
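Pending details from the author, here is a minimal inference sketch (the repo id is this card's; the example sentence is hypothetical):
```python
# Fill-mask inference with the fine-tuned checkpoint; RoBERTa models use the <mask> token.
from transformers import pipeline

fill = pipeline("fill-mask", model="jmarcosh/roberta-imdb")
for pred in fill("This movie was absolutely <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```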
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9994 | 1.0 | 1530 | 0.8899 |
| 0.8583 | 2.0 | 3060 | 0.7725 |
| 0.798 | 3.0 | 4590 | 0.7311 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0.post200
- Datasets 2.18.0
- Tokenizers 0.13.3
|
BohdanPetryshyn/codellama-7b-openapi-completion-ctx-lvl-fim-05-spm | BohdanPetryshyn | 2024-05-12T13:28:16Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-05-11T19:22:49Z | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: codellama-7b-openapi-completion-ctx-lvl-fim-05-spm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/bohdan-petryshyn/huggingface/runs/a3t18tyv)
# codellama-7b-openapi-completion-ctx-lvl-fim-05-spm
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5049
## Model description
More information needed
## Intended uses & limitations
More information needed
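Pending details from the author, here is a minimal sketch of attaching this adapter to its base model (repo ids taken from the metadata above):
```python
# Load the CodeLlama base model, then apply this LoRA adapter with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "BohdanPetryshyn/codellama-7b-openapi-completion-ctx-lvl-fim-05-spm"
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
```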
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4019 | 0.1 | 100 | 0.5064 |
| 0.4417 | 0.2 | 200 | 0.4935 |
| 0.7181 | 0.3 | 300 | 0.4962 |
| 0.2782 | 0.4 | 400 | 0.5002 |
| 0.5151 | 0.5 | 500 | 0.5026 |
| 0.3643 | 0.6 | 600 | 0.4996 |
| 0.2471 | 0.7 | 700 | 0.5100 |
| 0.3819 | 0.8 | 800 | 0.4980 |
| 0.2112 | 0.9 | 900 | 0.5002 |
| 0.0892 | 1.0 | 1000 | 0.5049 |
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.41.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
dbands/llama-3-8b-orca-math-word-problems-instruct | dbands | 2024-05-12T13:27:16Z | 23 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T11:15:35Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
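The card gives no usage snippet; here is a minimal loading sketch for the safetensors weights (repo id from this card, everything else an assumption):
```python
# Load the fine-tuned model and tokenizer with plain transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "dbands/llama-3-8b-orca-math-word-problems-instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```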
|
clxudiajazmin/summarization-TFM-ClaudiaSoria_v2 | clxudiajazmin | 2024-05-12T13:22:05Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-12T13:10:02Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization-TFM-ClaudiaSoria_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-TFM-ClaudiaSoria_v2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2477
- Rouge1: 0.2128
- Rouge2: 0.0977
- Rougel: 0.1781
- Rougelsum: 0.1775
- Generated Length: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
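Pending details from the author, here is a minimal inference sketch (repo id from this card; the input text is a placeholder):
```python
# Summarization inference; max_length mirrors the ~19-token generated length reported above.
from transformers import pipeline

summarizer = pipeline("summarization", model="clxudiajazmin/summarization-TFM-ClaudiaSoria_v2")
print(summarizer("Replace this with the text you want summarized.", max_length=19))
```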
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Generated Length |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------------:|
| No log | 1.0 | 50 | 2.3597 | 0.2168 | 0.0987 | 0.1814 | 0.181 | 19.0 |
| No log | 2.0 | 100 | 2.2748 | 0.2152 | 0.099 | 0.179 | 0.1785 | 19.0 |
| No log | 3.0 | 150 | 2.2477 | 0.2128 | 0.0977 | 0.1781 | 0.1775 | 19.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.3.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
pahri/setfit-indo-restomix-polarity | pahri | 2024-05-12T13:21:04Z | 7 | 0 | setfit | [
"setfit",
"safetensors",
"xlm-roberta",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"region:us"
] | text-classification | 2024-05-12T13:20:00Z | ---
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: nihh, buat bumbu masih kurang berani:Kemaren kebetulan makan babat sama nyobain
cumi, buat tekstur babatnya itu engga alot sama sekali dan tidak amis, sedangkan
buat cumi utuh lumayan gede juga tekstur kenyel kenyelnya dapet dan mateng juga
sampe ke dalem. Tapi ada tapinyaa nihh, buat bumbu masih kurang berani dan kurang
meresap.
- text: servicenya😊..Menunya variatif, delicious:Baru pertama kali coba baby dutch
pancake..Overall sukaa dengan food, place & servicenya😊..Menunya variatif, delicious,
penyajian cepat & pelayanan sangat baik.
- text: enak, tapi porsinya kecil untuk harganya:Makanannya enak, tapi porsinya kecil
untuk harganya. Suasana bagus, tetapi layanannya lambat.
- text: specialist itu. Varian minumnya juga cuma sedikit:Menunya gak banyak, pancake
dan rosti aja. Karena specialist itu. Varian minumnya juga cuma sedikit.
- text: enak. Favorit selada air krispi dan ayam bakar:Warung Sunda murah meriah dan
makanannya enak. Favorit selada air krispi dan ayam bakar. Bakwan dan perkedelnya
juga enak. Paru gorengnya lembut. Tak lengkap kalau kebandung. Kalau tidak makan
siang disini
pipeline_tag: text-classification
inference: false
---
# SetFit Polarity Model
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** id_core_news_trf
- **SetFitABSA Aspect Model:** [pahri/setfit-indo-restomix-aspect](https://huggingface.co/pahri/setfit-indo-restomix-aspect)
- **SetFitABSA Polarity Model:** [pahri/setfit-indo-restomix-polarity](https://huggingface.co/pahri/setfit-indo-restomix-polarity)
- **Maximum Sequence Length:** 8192 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| negative | <ul><li>'tempatnya, tp pilihan makanannya terbatas, kalo:instagramable tempatnya, tp pilihan makanannya terbatas, kalo untuk nongkrong ngemil sih enak tempatnya,tp kl yg mau makan berart krg bnyk variasi menunya'</li><li>'cuma menurut saya overpriced sih , untuk:untuk makanan rasa enak cuma menurut saya overpriced sih , untuk pancake tipis gtu dibandrol harga 65rb dengan topping yg sedikit , cuma 2 blueberry dan 2 raspberry dan pistachionya jg sedikit.'</li><li>'Menunggu 50menit dan pesanan tak kunjung datang:Gak worthed. Menunggu 50menit dan pesanan tak kunjung datang. Gak lagi deh ke tempat ini. Keterlaluan servisnya.'</li></ul> |
| positive | <ul><li>'ngemil sih enak tempatnya,tp kl:instagramable tempatnya, tp pilihan makanannya terbatas, kalo untuk nongkrong ngemil sih enak tempatnya,tp kl yg mau makan berart krg bnyk variasi menunya'</li><li>'Harganya juga masih terjangkau:Harganya juga masih terjangkau'</li><li>'nyambung, dan sambal leunca-nya enak beutullll....:Rasa ayam goreng/ati-ampela goreng gurih asinnya pas, sayur asem yang isinya banyak dan ras asam-manisnya nyambung, dan sambal leunca-nya enak beutullll.... Pakai petai dan tempe/tahu lebih sempurna.'</li></ul> |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"pahri/setfit-indo-restomix-aspect",
"pahri/setfit-indo-restomix-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 7 | 28.3207 | 90 |
| Label | Training Sample Count |
|:--------|:----------------------|
| konflik | 0 |
| negatif | 0 |
| netral | 0 |
| positif | 0 |
### Training Hyperparameters
- batch_size: (6, 6)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0003 | 1 | 0.3326 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.4
- Transformers: 4.36.2
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
RichardErkhov/aloobun_-_bun_mistral_7b_v2-8bits | RichardErkhov | 2024-05-12T13:16:43Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-12T13:09:11Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bun_mistral_7b_v2 - bnb 8bits
- Model creator: https://huggingface.co/aloobun/
- Original model: https://huggingface.co/aloobun/bun_mistral_7b_v2/
Original model description:
---
language:
- en
tags:
- CoT
license: cc
---
Fine-tuned from mistralai/Mistral-7B-v0.1 for CoT reasoning.
- gptq : [TheBloke/bun_mistral_7b_v2-GPTQ](https://huggingface.co/TheBloke/bun_mistral_7b_v2-GPTQ)
- awq : [TheBloke/bun_mistral_7b_v2-AWQ](https://huggingface.co/TheBloke/bun_mistral_7b_v2-AWQ)
- gguf : [TheBloke/bun_mistral_7b_v2-GGUF](https://huggingface.co/TheBloke/bun_mistral_7b_v2-GGUF)
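A minimal loading sketch for this pre-quantized checkpoint (not from the original author; it assumes the 8-bit bitsandbytes quantization config is serialized with the weights, and it requires `bitsandbytes` plus a CUDA GPU):
```python
# A serialized bitsandbytes config is restored automatically by from_pretrained (assumption).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/aloobun_-_bun_mistral_7b_v2-8bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
```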
Fine-tuning language models is like tuning the strings of an AI banjo in the cosmic saloon of the digital frontier. We're not just slinging code; it's a harmonious quest to shape the minds of silicon wanderers, crafting binary ballads and electronic echoes. Picture it as cybernetic bardic magic, where we, the tech sorcerers, weave algorithms with strands of imagination. But, in this cosmic hoedown, there's a twist – as we twang the strings of artificial intelligence, we're also seeding the algorithms with a bit of human stardust, adding quirks and quirksome biases. So, as we two-step into this frontier of creation, are we summoning AI troubadours of the future or just conjuring interstellar jesters, spinning tales of silicon whimsy and digital campfire banter?
|
RichardErkhov/aloobun_-_bun_mistral_7b_v2-4bits | RichardErkhov | 2024-05-12T13:08:22Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-12T13:03:45Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bun_mistral_7b_v2 - bnb 4bits
- Model creator: https://huggingface.co/aloobun/
- Original model: https://huggingface.co/aloobun/bun_mistral_7b_v2/
Original model description:
---
language:
- en
tags:
- CoT
license: cc
---
Fine-tuned from mistralai/Mistral-7B-v0.1 for CoT reasoning.
- gptq : [TheBloke/bun_mistral_7b_v2-GPTQ](https://huggingface.co/TheBloke/bun_mistral_7b_v2-GPTQ)
- awq : [TheBloke/bun_mistral_7b_v2-AWQ](https://huggingface.co/TheBloke/bun_mistral_7b_v2-AWQ)
- gguf : [TheBloke/bun_mistral_7b_v2-GGUF](https://huggingface.co/TheBloke/bun_mistral_7b_v2-GGUF)
Fine-tuning language models is like tuning the strings of an AI banjo in the cosmic saloon of the digital frontier. We're not just slinging code; it's a harmonious quest to shape the minds of silicon wanderers, crafting binary ballads and electronic echoes. Picture it as cybernetic bardic magic, where we, the tech sorcerers, weave algorithms with strands of imagination. But, in this cosmic hoedown, there's a twist – as we twang the strings of artificial intelligence, we're also seeding the algorithms with a bit of human stardust, adding quirks and quirksome biases. So, as we two-step into this frontier of creation, are we summoning AI troubadours of the future or just conjuring interstellar jesters, spinning tales of silicon whimsy and digital campfire banter?
|
nfhakim/sentiment-analysis-c1 | nfhakim | 2024-05-12T13:03:26Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"indobert",
"indobenchmark",
"indonlu",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-12T11:51:23Z | ---
language:
- id
tags:
- indobert
- indobenchmark
- indonlu
---
This is the first sentiment-analysis classification model for the (redacted) task
### How to import
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer, BertConfig
tokenizer = BertTokenizer.from_pretrained("nfhakim/sentiment-analysis-c1")
config = BertConfig.from_pretrained("nfhakim/sentiment-analysis-c1")
model = BertForSequenceClassification.from_pretrained("nfhakim/sentiment-analysis-c1", config=config)
```
### How to use
```python
from transformers import pipeline
nlp = pipeline("text-classification", model="nfhakim/sentiment-analysis-c1")
results = nlp("Your input text here")
``` |
casque/0161_girl_wearing_boots_v1 | casque | 2024-05-12T13:00:50Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-05-12T12:59:39Z | ---
license: creativeml-openrail-m
---
|
RichardErkhov/scb10x_-_typhoon-7b-gguf | RichardErkhov | 2024-05-12T12:58:43Z | 68 | 0 | null | [
"gguf",
"arxiv:2312.13951",
"endpoints_compatible",
"region:us"
] | null | 2024-05-12T11:07:25Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
typhoon-7b - GGUF
- Model creator: https://huggingface.co/scb10x/
- Original model: https://huggingface.co/scb10x/typhoon-7b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [typhoon-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q2_K.gguf) | Q2_K | 2.55GB |
| [typhoon-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ3_XS.gguf) | IQ3_XS | 2.83GB |
| [typhoon-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ3_S.gguf) | IQ3_S | 2.98GB |
| [typhoon-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q3_K_S.gguf) | Q3_K_S | 2.96GB |
| [typhoon-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ3_M.gguf) | IQ3_M | 3.07GB |
| [typhoon-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q3_K.gguf) | Q3_K | 3.29GB |
| [typhoon-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q3_K_M.gguf) | Q3_K_M | 3.29GB |
| [typhoon-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q3_K_L.gguf) | Q3_K_L | 3.57GB |
| [typhoon-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ4_XS.gguf) | IQ4_XS | 3.69GB |
| [typhoon-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_0.gguf) | Q4_0 | 3.84GB |
| [typhoon-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.IQ4_NL.gguf) | IQ4_NL | 3.89GB |
| [typhoon-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_K_S.gguf) | Q4_K_S | 3.87GB |
| [typhoon-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_K.gguf) | Q4_K | 4.09GB |
| [typhoon-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_K_M.gguf) | Q4_K_M | 4.09GB |
| [typhoon-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q4_1.gguf) | Q4_1 | 4.26GB |
| [typhoon-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_0.gguf) | Q5_0 | 4.67GB |
| [typhoon-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_K_S.gguf) | Q5_K_S | 4.67GB |
| [typhoon-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_K.gguf) | Q5_K | 4.8GB |
| [typhoon-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_K_M.gguf) | Q5_K_M | 4.8GB |
| [typhoon-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q5_1.gguf) | Q5_1 | 5.09GB |
| [typhoon-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/scb10x_-_typhoon-7b-gguf/blob/main/typhoon-7b.Q6_K.gguf) | Q6_K | 5.55GB |
Original model description:
---
license: apache-2.0
language:
- th
library_name: transformers
pipeline_tag: text-generation
tags:
- pretrained
---
# Typhoon-7B: Thai Large Language Model (Pretrained)
**Typhoon-7B** is a *pretrained* Thai 🇹🇭 large language model with 7 billion parameters, and it is based on Mistral-7B.
**Typhoon-7B** outperforms all open-source Thai language models at the time of writing as evaluated on Thai examination benchmarks, and its instruction-tuned variant achieves the best results in instruction-following tasks. Also, its performance in Thai is on par with GPT-3.5 while being 2.62 times more efficient in tokenizing Thai text.
**This is not an instruction-tuned model** - It may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses.
The Instruct model (chat model) will be released soon. Registration for the beta version is open at https://opentyphoon.ai/, or follow us for future model releases at https://twitter.com/opentyphoon.
<div align="center">
<img src="https://storage.googleapis.com/scb10x-ai-lab-public/assets/typhoon_benchmark.png" alt="Typhoon benchmark" width="100%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</div>
For full details of this model, please read our [paper](https://arxiv.org/abs/2312.13951).
## Model Description
- **Model type**: A 7B pretrained decoder-only model
- **Requirement**: transformers 4.34.0 or newer.
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: Apache-2.0 (Commercial)
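For reference, here is a minimal sketch of loading the original (non-GGUF) checkpoint with transformers 4.34.0 or newer, per the requirement above; the dtype and prompt are illustrative:
```python
# Base (non-instruct) model: prompt with a continuation, not an instruction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("scb10x/typhoon-7b")
model = AutoModelForCausalLM.from_pretrained(
    "scb10x/typhoon-7b", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("ประเทศไทยมีจังหวัดทั้งหมด", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```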
## Performance on Thai Benchmark
| **Model** | **ONET** | **IC** | **TGAT** | **TPAT-1** | **A-Level** |
|---------------------|----------|--------|----------|------------|-------------|
| Typhoon-7B | 0.379 | 0.393 | 0.700 | 0.414 | 0.324 |
| SeaLLM-7B | 0.342 | 0.256 | 0.589 | 0.336 | 0.305 |
| OpenThaiGPT-beta-7B | 0.180 | 0.278 | 0.411 | 0.319 | 0.243 |
| WangChanGLM | 0.192 | 0.271 | 0.167 | 0.172 | 0.175 |
| SEA-LION-7B | 0.179 | 0.290 | 0.244 | 0.198 | 0.175 |
| Avg. Human | 0.318 | - | 0.472 | 0.406 | - |
## Intended Uses & Limitations
This model is a pretrained base model. Thus, it may not be able to follow human instructions without using one/few-shot learning or instruction fine-tuning. The model does not have any moderation mechanisms, and may generate harmful or inappropriate responses.
## Follow us
https://twitter.com/opentyphoon
## Support / Ask any question
https://discord.gg/CqyBscMFpg
## SCB10X AI Team
- Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-7B useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
title={Typhoon: Thai Large Language Models},
author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
year={2023},
journal={arXiv preprint arXiv:2312.13951},
url={https://arxiv.org/abs/2312.13951}
}
```
## Contact Us
- General & Collaboration: [email protected], [email protected]
- Technical: [email protected]
|
mcrouch08/test_trainer | mcrouch08 | 2024-05-12T12:50:33Z | 166 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-12T12:48:26Z | ---
license: apache-2.0
base_model: google-bert/bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1132
- Accuracy: 0.583
## Model description
More information needed
## Intended uses & limitations
More information needed
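Pending details from the author, here is a minimal inference sketch (repo id from this card; the label names are not documented, so expect generic `LABEL_*` outputs):
```python
# Text-classification inference with the fine-tuned checkpoint.
from transformers import pipeline

clf = pipeline("text-classification", model="mcrouch08/test_trainer")
print(clf("An example sentence to classify."))
```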
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.1366 | 0.485 |
| No log | 2.0 | 250 | 0.9999 | 0.59 |
| No log | 3.0 | 375 | 1.1132 | 0.583 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Pandluru/SDXL-Base | Pandluru | 2024-05-12T12:48:23Z | 3 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"dora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-04-28T01:17:28Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- dora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Pandluru/SDXL-Base
<Gallery />
## Model description
These are Pandluru/SDXL-Base LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Pandluru/SDXL-Base/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
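Until the author fills in the snippet above, here is a sketch of loading these weights with diffusers (base model and VAE taken from the description; the prompt uses the trigger words):
```python
# Load SDXL base with the fp16-fix VAE, attach the DreamBooth LoRA, then sample.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Pandluru/SDXL-Base")  # LoRA repo from this card

image = pipe("a photo of TOK dog", num_inference_steps=30).images[0]
image.save("tok_dog.png")
```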
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
SolidSnacke/Llama-3-Soliloquy-8B-v2-i-GGUF | SolidSnacke | 2024-05-12T12:41:07Z | 14 | 3 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"text-generation",
"en",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | text-generation | 2024-05-03T20:16:17Z | ---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- text-generation-inference
---
I hope I made it...
12.05.24: Reworked versions of the model have been uploaded to the v1.1 branch.
Links to the original model and the quantization script:
- openlynn/Llama-3-Soliloquy-8B-v2: https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2
- FantasiaFoundry/GGUF-Quantization-Script: https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script |
YoSHiK/Fugaku-LLM-13B-instruct_f16_Q4_K_M | YoSHiK | 2024-05-12T12:36:02Z | 0 | 0 | null | [
"ja",
"license:other",
"region:us"
] | null | 2024-05-12T12:21:00Z | ---
license: other
language:
- ja
---
# License
https://huggingface.co/Fugaku-LLM/Fugaku-LLM-13B/blob/main/LICENSE |
flammenai/Mahou-1.1-llama3-8B-GGUF | flammenai | 2024-05-12T12:33:55Z | 14 | 3 | transformers | [
"transformers",
"gguf",
"dataset:flammenai/Grill-preprod-v1_chatML",
"base_model:flammenai/Mahou-1.1-llama3-8B",
"base_model:quantized:flammenai/Mahou-1.1-llama3-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-05-12T12:23:37Z | ---
library_name: transformers
tags: []
base_model:
- flammenai/Mahou-1.1-llama3-8B
datasets:
- flammenai/Grill-preprod-v1_chatML
license: llama3
---

# Mahou-1.1-llama3-8B
Mahou is our attempt to build a production-ready conversational/roleplay LLM.
Future versions will be released iteratively and finetuned from flammen.ai conversational data.
### Chat Format
This model has been trained to use ChatML format.
```
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>{{char}}
{{message}}<|im_end|>
<|im_start|>{{user}}
{{message}}<|im_end|>
```
### ST Settings
1. Use ChatML for the Context Template.
2. Turn on Instruct Mode for ChatML.
3. Use the following stopping strings: `["<", "|", "<|", "\n"]`
### License
This model is based on Meta Llama-3-8B and is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
LoRA, model, and training settings:
```python
# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=2,
gradient_checkpointing=True,
learning_rate=3e-5,
lr_scheduler_type="cosine",
max_steps=420,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
force_use_ref_model=True
)
```
|
Mag0g/Ezekiel25_16 | Mag0g | 2024-05-12T12:32:10Z | 124 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T09:24:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
euiyulsong/Mistral-7B-SFT-DPO2 | euiyulsong | 2024-05-12T12:31:06Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-12T12:29:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HeavenWaters/llmProj1ESameer | HeavenWaters | 2024-05-12T12:23:23Z | 140 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T12:23:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Emar2708/LunarLander-v2-intro-project | Emar2708 | 2024-05-12T12:21:13Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-12T12:20:52Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.48 +/- 9.75
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumed convention; check the repo's file listing for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Repo id from this card; the .zip filename follows the usual SB3/Hub naming and is assumed here
checkpoint = load_from_hub("Emar2708/LunarLander-v2-intro-project", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
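To sanity-check the mean reward reported above, a hedged evaluation sketch (uses `gymnasium`, which recent Stable-Baselines3 releases target; LunarLander additionally needs the Box2D extra):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Roll out the loaded policy for a few episodes and report mean +/- std reward
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```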
|
shkna1368/umt5-small-finetuned-umt5-poemV1 | shkna1368 | 2024-05-12T12:10:09Z | 99 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"umt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/umt5-small",
"base_model:finetune:google/umt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-12T11:52:27Z | ---
license: apache-2.0
base_model: google/umt5-small
tags:
- generated_from_trainer
model-index:
- name: umt5-small-finetuned-umt5-poemV1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# umt5-small-finetuned-umt5-poemV1
This model is a fine-tuned version of [google/umt5-small](https://huggingface.co/google/umt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan

(The flat 0.0 training loss alongside a `nan` validation loss suggests the run diverged; mT5-family checkpoints are known to overflow under fp16 mixed precision, so retraining in bf16 or fp32 may be needed.)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
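For reference, a hedged `Seq2SeqTrainingArguments` sketch mirroring the list above (dataset loading and `Seq2SeqTrainer` wiring omitted; the Adam betas/epsilon in the list are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters reported above; output_dir is illustrative
args = Seq2SeqTrainingArguments(
    output_dir="umt5-small-finetuned-umt5-poemV1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed precision
)
```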
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 282 | nan |
| 0.0 | 2.0 | 564 | nan |
| 0.0 | 3.0 | 846 | nan |
| 0.0 | 4.0 | 1128 | nan |
| 0.0 | 5.0 | 1410 | nan |
| 0.0 | 6.0 | 1692 | nan |
| 0.0 | 7.0 | 1974 | nan |
| 0.0 | 8.0 | 2256 | nan |
| 0.0 | 9.0 | 2538 | nan |
| 0.0 | 10.0 | 2820 | nan |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
chrlu/zephyr-7b-gemma-policy_focused_loss | chrlu | 2024-05-12T12:08:35Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:argilla/dpo-mix-7k",
"base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"base_model:finetune:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T12:03:49Z | ---
license: other
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- argilla/dpo-mix-7k
model-index:
- name: zephyr-7b-gemma-policy_focused_loss
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gemma-policy_focused_loss
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
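No usage guidance is given; a minimal inference sketch, assuming the tokenizer's chat template is inherited from the SFT base (prompt content and generation settings are illustrative):
```python
from transformers import pipeline

# Repo id from this card; device_map="auto" assumes accelerate is installed
pipe = pipeline("text-generation", model="chrlu/zephyr-7b-gemma-policy_focused_loss", device_map="auto")
messages = [{"role": "user", "content": "Explain direct preference optimization in one sentence."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=64)[0]["generated_text"])
```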
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
moetezsa/first_train_2NDTRY | moetezsa | 2024-05-12T12:08:22Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-05-12T10:04:32Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- unsloth
- generated_from_trainer
base_model: unsloth/mistral-7b-bnb-4bit
model-index:
- name: first_train_2NDTRY
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_train_2NDTRY
This model is a fine-tuned version of [unsloth/mistral-7b-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-bnb-4bit) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
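No usage instructions are provided; a hedged sketch for loading the PEFT adapter on its base checkpoint (repo ids are taken from this card's metadata; the 4-bit base requires `bitsandbytes`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base checkpoint and adapter repo ids from this card's metadata
base = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "moetezsa/first_train_2NDTRY")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-bnb-4bit")
```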
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 |
chrlu/zephyr-7b-gemma-combined_exp_logistic_loss | chrlu | 2024-05-12T12:01:36Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"conversational",
"dataset:argilla/dpo-mix-7k",
"base_model:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"base_model:finetune:HuggingFaceH4/zephyr-7b-gemma-sft-v0.1",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-12T11:57:00Z | ---
license: other
base_model: HuggingFaceH4/zephyr-7b-gemma-sft-v0.1
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- argilla/dpo-mix-7k
model-index:
- name: zephyr-7b-gemma-combined_exp_logistic_loss
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-gemma-combined_exp_logistic_loss
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-gemma-sft-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-sft-v0.1) on the argilla/dpo-mix-7k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
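No usage guidance is given; a hedged sketch calling `generate` directly (model id from this card; dtype, prompt, and token budget are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chrlu/zephyr-7b-gemma-combined_exp_logistic_loss"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Format a single-turn chat with the tokenizer's built-in template, then generate
messages = [{"role": "user", "content": "Contrast logistic and exponential preference losses."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```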
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.1.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|