Dataset columns:

| Column | Type | Range |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | - |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | - |
| created_at | stringlengths | 25 to 25 |
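Rows with this schema can be read with the `datasets` library; the repository id in the sketch below is a placeholder, since the dump itself does not name the dataset:

```python
from datasets import load_dataset

# "user/model-cards-dump" is a placeholder id; substitute the actual dataset repository.
ds = load_dataset("user/model-cards-dump", split="train", streaming=True)

row = next(iter(ds))
print(row["id"], row["pipeline_tag"], row["tags"][:3])
```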
null | null |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Boreas-7B - GGUF
- Model creator: https://huggingface.co/yhavinga/
- Original model: https://huggingface.co/yhavinga/Boreas-7B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Boreas-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Boreas-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Boreas-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Boreas-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Boreas-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Boreas-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Boreas-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Boreas-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Boreas-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Boreas-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Boreas-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Boreas-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Boreas-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Boreas-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Boreas-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Boreas-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Boreas-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Boreas-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Boreas-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Boreas-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Boreas-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/yhavinga_-_Boreas-7B-gguf/blob/main/Boreas-7B.Q6_K.gguf) | Q6_K | 5.53GB |
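The files above work with any GGUF-compatible runtime. As a minimal sketch (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; Q4_K_M is just an example choice), one quant can be downloaded and run like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one of the quantized files listed above
path = hf_hub_download(
    repo_id="RichardErkhov/yhavinga_-_Boreas-7B-gguf",
    filename="Boreas-7B.Q4_K_M.gguf",
)

# Load it and generate; n_ctx is a modest context size chosen for the example
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Boreas is", max_tokens=64)
print(out["choices"][0]["text"])
```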
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Boreas-7B
Base model of [Boreas-7B-chat](https://huggingface.co/yhavinga/Boreas-7B-chat).
For more information, refer to the README of the chat model.
| {} | RichardErkhov/yhavinga_-_Boreas-7B-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-01T15:17:01+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
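Since the section above is left blank, here is a hedged placeholder sketch; it assumes the checkpoint at `ai-maker-space/gen-z-translate-llama-3-instruct` (the repository this card belongs to, per the row metadata) loads with the standard `transformers` text-generation pipeline:

```python
from transformers import pipeline

# Repository id taken from this row's metadata; the prompt is an arbitrary example.
generator = pipeline("text-generation", model="ai-maker-space/gen-z-translate-llama-3-instruct")
print(generator("Hello, how are you today?", max_new_tokens=50)[0]["generated_text"])
```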
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ai-maker-space/gen-z-translate-llama-3-instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:17:54+00:00 |
text-classification | setfit |
# SetFit Aspect Model
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained as part of a larger ABSA system, which works as follows:
1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
## Model Details
### Model Description
- **Model Type:** SetFit
<!-- - **Sentence Transformer:** [Unknown](https://huggingface.co/unknown) -->
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** id_core_news_trf
- **SetFitABSA Aspect Model:** [pahri/setfit-indo-resto-RM-ibu-imas-aspect](https://huggingface.co/pahri/setfit-indo-resto-RM-ibu-imas-aspect)
- **SetFitABSA Polarity Model:** [pahri/setfit-indo-resto-RM-ibu-imas-polarity](https://huggingface.co/pahri/setfit-indo-resto-RM-ibu-imas-polarity)
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| no aspect | <ul><li>'ambel leuncanya:ambel leuncanya enak terus pedesss'</li><li>'Warung Sunda:Warung Sunda murah meriah dan makanannya enak. Favorit selada air krispi dan ayam bakar'</li><li>'makanannya:Warung Sunda murah meriah dan makanannya enak. Favorit selada air krispi dan ayam bakar'</li></ul> |
| aspect | <ul><li>'ayam bakar:Warung Sunda murah meriah dan makanannya enak. Favorit selada air krispi dan ayam bakar'</li><li>'Ayam bakar:Ayam bakar,sambel leunca sambel terasi merah enak banget 9/10, perkedel jagung 8/10 makan pakai sambel mantap. Makan berdua sekitar 77k'</li><li>'sambel terasi merah:Ayam bakar,sambel leunca sambel terasi merah enak banget 9/10, perkedel jagung 8/10 makan pakai sambel mantap. Makan berdua sekitar 77k'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8063 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"pahri/setfit-indo-resto-RM-ibu-imas-aspect",
"pahri/setfit-indo-resto-RM-ibu-imas-polarity",
)
# Run inference
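# Note: the model targets Indonesian restaurant reviews (spaCy pipeline id_core_news_trf); the English sentence below is only the template example.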
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 37.7180 | 93 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 371 |
| aspect | 51 |
### Training Hyperparameters
- batch_size: (6, 6)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
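These values correspond one-to-one with `setfit`'s `TrainingArguments` fields (tuple values cover the embedding and classifier phases). A sketch of the equivalent configuration, assuming setfit 1.0.3:

```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

# Reconstruction of the logged hyperparameters; tuples are (embedding phase, classifier phase).
args = TrainingArguments(
    batch_size=(6, 6),
    num_epochs=(1, 16),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=True,
    warmup_proportion=0.1,
    seed=42,
    load_best_model_at_end=False,
)
```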
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.4225 | - |
| 0.0021 | 50 | 0.2528 | - |
| 0.0043 | 100 | 0.3611 | - |
| 0.0064 | 150 | 0.2989 | - |
| 0.0085 | 200 | 0.2907 | - |
| 0.0107 | 250 | 0.1609 | - |
| 0.0128 | 300 | 0.3534 | - |
| 0.0149 | 350 | 0.1294 | - |
| 0.0171 | 400 | 0.2797 | - |
| 0.0192 | 450 | 0.3119 | - |
| 0.0213 | 500 | 0.004 | - |
| 0.0235 | 550 | 0.1057 | - |
| 0.0256 | 600 | 0.1049 | - |
| 0.0277 | 650 | 0.1601 | - |
| 0.0299 | 700 | 0.151 | - |
| 0.0320 | 750 | 0.1034 | - |
| 0.0341 | 800 | 0.2356 | - |
| 0.0363 | 850 | 0.1335 | - |
| 0.0384 | 900 | 0.0559 | - |
| 0.0405 | 950 | 0.0028 | - |
| 0.0427 | 1000 | 0.1307 | - |
| 0.0448 | 1050 | 0.0049 | - |
| 0.0469 | 1100 | 0.1348 | - |
| 0.0491 | 1150 | 0.0392 | - |
| 0.0512 | 1200 | 0.054 | - |
| 0.0533 | 1250 | 0.0016 | - |
| 0.0555 | 1300 | 0.0012 | - |
| 0.0576 | 1350 | 0.0414 | - |
| 0.0597 | 1400 | 0.1087 | - |
| 0.0618 | 1450 | 0.0464 | - |
| 0.0640 | 1500 | 0.0095 | - |
| 0.0661 | 1550 | 0.0011 | - |
| 0.0682 | 1600 | 0.0002 | - |
| 0.0704 | 1650 | 0.1047 | - |
| 0.0725 | 1700 | 0.001 | - |
| 0.0746 | 1750 | 0.0965 | - |
| 0.0768 | 1800 | 0.0002 | - |
| 0.0789 | 1850 | 0.1436 | - |
| 0.0810 | 1900 | 0.0011 | - |
| 0.0832 | 1950 | 0.001 | - |
| 0.0853 | 2000 | 0.1765 | - |
| 0.0874 | 2050 | 0.1401 | - |
| 0.0896 | 2100 | 0.0199 | - |
| 0.0917 | 2150 | 0.0 | - |
| 0.0938 | 2200 | 0.0023 | - |
| 0.0960 | 2250 | 0.0034 | - |
| 0.0981 | 2300 | 0.0001 | - |
| 0.1002 | 2350 | 0.0948 | - |
| 0.1024 | 2400 | 0.1634 | - |
| 0.1045 | 2450 | 0.0 | - |
| 0.1066 | 2500 | 0.0005 | - |
| 0.1088 | 2550 | 0.0695 | - |
| 0.1109 | 2600 | 0.0 | - |
| 0.1130 | 2650 | 0.0067 | - |
| 0.1152 | 2700 | 0.0025 | - |
| 0.1173 | 2750 | 0.0013 | - |
| 0.1194 | 2800 | 0.1426 | - |
| 0.1216 | 2850 | 0.0001 | - |
| 0.1237 | 2900 | 0.0 | - |
| 0.1258 | 2950 | 0.0 | - |
| 0.1280 | 3000 | 0.0001 | - |
| 0.1301 | 3050 | 0.0001 | - |
| 0.1322 | 3100 | 0.0122 | - |
| 0.1344 | 3150 | 0.0 | - |
| 0.1365 | 3200 | 0.0001 | - |
| 0.1386 | 3250 | 0.0041 | - |
| 0.1408 | 3300 | 0.2549 | - |
| 0.1429 | 3350 | 0.0062 | - |
| 0.1450 | 3400 | 0.0154 | - |
| 0.1472 | 3450 | 0.1776 | - |
| 0.1493 | 3500 | 0.0039 | - |
| 0.1514 | 3550 | 0.0183 | - |
| 0.1536 | 3600 | 0.0045 | - |
| 0.1557 | 3650 | 0.1108 | - |
| 0.1578 | 3700 | 0.0002 | - |
| 0.1600 | 3750 | 0.01 | - |
| 0.1621 | 3800 | 0.0002 | - |
| 0.1642 | 3850 | 0.0001 | - |
| 0.1664 | 3900 | 0.1612 | - |
| 0.1685 | 3950 | 0.0107 | - |
| 0.1706 | 4000 | 0.0548 | - |
| 0.1728 | 4050 | 0.0001 | - |
| 0.1749 | 4100 | 0.0162 | - |
| 0.1770 | 4150 | 0.1294 | - |
| 0.1792 | 4200 | 0.0 | - |
| 0.1813 | 4250 | 0.0032 | - |
| 0.1834 | 4300 | 0.0051 | - |
| 0.1855 | 4350 | 0.0 | - |
| 0.1877 | 4400 | 0.0151 | - |
| 0.1898 | 4450 | 0.0097 | - |
| 0.1919 | 4500 | 0.0002 | - |
| 0.1941 | 4550 | 0.0045 | - |
| 0.1962 | 4600 | 0.0001 | - |
| 0.1983 | 4650 | 0.0001 | - |
| 0.2005 | 4700 | 0.0227 | - |
| 0.2026 | 4750 | 0.0018 | - |
| 0.2047 | 4800 | 0.0 | - |
| 0.2069 | 4850 | 0.0001 | - |
| 0.2090 | 4900 | 0.0 | - |
| 0.2111 | 4950 | 0.0 | - |
| 0.2133 | 5000 | 0.0 | - |
| 0.2154 | 5050 | 0.0002 | - |
| 0.2175 | 5100 | 0.0002 | - |
| 0.2197 | 5150 | 0.0038 | - |
| 0.2218 | 5200 | 0.0 | - |
| 0.2239 | 5250 | 0.0 | - |
| 0.2261 | 5300 | 0.0 | - |
| 0.2282 | 5350 | 0.0028 | - |
| 0.2303 | 5400 | 0.0 | - |
| 0.2325 | 5450 | 0.1146 | - |
| 0.2346 | 5500 | 0.0 | - |
| 0.2367 | 5550 | 0.0073 | - |
| 0.2389 | 5600 | 0.0467 | - |
| 0.2410 | 5650 | 0.0092 | - |
| 0.2431 | 5700 | 0.0196 | - |
| 0.2453 | 5750 | 0.0002 | - |
| 0.2474 | 5800 | 0.0043 | - |
| 0.2495 | 5850 | 0.0378 | - |
| 0.2517 | 5900 | 0.0049 | - |
| 0.2538 | 5950 | 0.0054 | - |
| 0.2559 | 6000 | 0.1757 | - |
| 0.2581 | 6050 | 0.0 | - |
| 0.2602 | 6100 | 0.0001 | - |
| 0.2623 | 6150 | 0.1327 | - |
| 0.2645 | 6200 | 0.0 | - |
| 0.2666 | 6250 | 0.0 | - |
| 0.2687 | 6300 | 0.0 | - |
| 0.2709 | 6350 | 0.0134 | - |
| 0.2730 | 6400 | 0.0001 | - |
| 0.2751 | 6450 | 0.0112 | - |
| 0.2773 | 6500 | 0.0864 | - |
| 0.2794 | 6550 | 0.0 | - |
| 0.2815 | 6600 | 0.0094 | - |
| 0.2837 | 6650 | 0.1358 | - |
| 0.2858 | 6700 | 0.0155 | - |
| 0.2879 | 6750 | 0.0025 | - |
| 0.2901 | 6800 | 0.0002 | - |
| 0.2922 | 6850 | 0.0001 | - |
| 0.2943 | 6900 | 0.2809 | - |
| 0.2965 | 6950 | 0.0 | - |
| 0.2986 | 7000 | 0.0242 | - |
| 0.3007 | 7050 | 0.0015 | - |
| 0.3028 | 7100 | 0.0 | - |
| 0.3050 | 7150 | 0.1064 | - |
| 0.3071 | 7200 | 0.1636 | - |
| 0.3092 | 7250 | 0.267 | - |
| 0.3114 | 7300 | 0.1656 | - |
| 0.3135 | 7350 | 0.0943 | - |
| 0.3156 | 7400 | 0.189 | - |
| 0.3178 | 7450 | 0.0055 | - |
| 0.3199 | 7500 | 0.1286 | - |
| 0.3220 | 7550 | 0.1062 | - |
| 0.3242 | 7600 | 0.1275 | - |
| 0.3263 | 7650 | 0.0101 | - |
| 0.3284 | 7700 | 0.0162 | - |
| 0.3306 | 7750 | 0.0001 | - |
| 0.3327 | 7800 | 0.0001 | - |
| 0.3348 | 7850 | 0.0003 | - |
| 0.3370 | 7900 | 0.0 | - |
| 0.3391 | 7950 | 0.135 | - |
| 0.3412 | 8000 | 0.0 | - |
| 0.3434 | 8050 | 0.0125 | - |
| 0.3455 | 8100 | 0.0004 | - |
| 0.3476 | 8150 | 0.0 | - |
| 0.3498 | 8200 | 0.2229 | - |
| 0.3519 | 8250 | 0.0 | - |
| 0.3540 | 8300 | 0.0051 | - |
| 0.3562 | 8350 | 0.0 | - |
| 0.3583 | 8400 | 0.0001 | - |
| 0.3604 | 8450 | 0.0 | - |
| 0.3626 | 8500 | 0.1261 | - |
| 0.3647 | 8550 | 0.0054 | - |
| 0.3668 | 8600 | 0.1636 | - |
| 0.3690 | 8650 | 0.0036 | - |
| 0.3711 | 8700 | 0.0 | - |
| 0.3732 | 8750 | 0.0027 | - |
| 0.3754 | 8800 | 0.0 | - |
| 0.3775 | 8850 | 0.1422 | - |
| 0.3796 | 8900 | 0.1314 | - |
| 0.3818 | 8950 | 0.003 | - |
| 0.3839 | 9000 | 0.0 | - |
| 0.3860 | 9050 | 0.0092 | - |
| 0.3882 | 9100 | 0.0129 | - |
| 0.3903 | 9150 | 0.0 | - |
| 0.3924 | 9200 | 0.0 | - |
| 0.3946 | 9250 | 0.1659 | - |
| 0.3967 | 9300 | 0.0 | - |
| 0.3988 | 9350 | 0.0 | - |
| 0.4010 | 9400 | 0.0085 | - |
| 0.4031 | 9450 | 0.0 | - |
| 0.4052 | 9500 | 0.0 | - |
| 0.4074 | 9550 | 0.0 | - |
| 0.4095 | 9600 | 0.0112 | - |
| 0.4116 | 9650 | 0.0 | - |
| 0.4138 | 9700 | 0.0154 | - |
| 0.4159 | 9750 | 0.0011 | - |
| 0.4180 | 9800 | 0.0077 | - |
| 0.4202 | 9850 | 0.0064 | - |
| 0.4223 | 9900 | 0.0 | - |
| 0.4244 | 9950 | 0.0 | - |
| 0.4265 | 10000 | 0.0121 | - |
| 0.4287 | 10050 | 0.0 | - |
| 0.4308 | 10100 | 0.0 | - |
| 0.4329 | 10150 | 0.0076 | - |
| 0.4351 | 10200 | 0.0039 | - |
| 0.4372 | 10250 | 0.2153 | - |
| 0.4393 | 10300 | 0.0 | - |
| 0.4415 | 10350 | 0.1218 | - |
| 0.4436 | 10400 | 0.0077 | - |
| 0.4457 | 10450 | 0.1311 | - |
| 0.4479 | 10500 | 0.0 | - |
| 0.4500 | 10550 | 0.0 | - |
| 0.4521 | 10600 | 0.0 | - |
| 0.4543 | 10650 | 0.0041 | - |
| 0.4564 | 10700 | 0.0073 | - |
| 0.4585 | 10750 | 0.0051 | - |
| 0.4607 | 10800 | 0.0 | - |
| 0.4628 | 10850 | 0.0 | - |
| 0.4649 | 10900 | 0.0 | - |
| 0.4671 | 10950 | 0.0001 | - |
| 0.4692 | 11000 | 0.0 | - |
| 0.4713 | 11050 | 0.1696 | - |
| 0.4735 | 11100 | 0.0 | - |
| 0.4756 | 11150 | 0.1243 | - |
| 0.4777 | 11200 | 0.0 | - |
| 0.4799 | 11250 | 0.0 | - |
| 0.4820 | 11300 | 0.0003 | - |
| 0.4841 | 11350 | 0.0707 | - |
| 0.4863 | 11400 | 0.166 | - |
| 0.4884 | 11450 | 0.4964 | - |
| 0.4905 | 11500 | 0.0023 | - |
| 0.4927 | 11550 | 0.0 | - |
| 0.4948 | 11600 | 0.0 | - |
| 0.4969 | 11650 | 0.173 | - |
| 0.4991 | 11700 | 0.0 | - |
| 0.5012 | 11750 | 0.0004 | - |
| 0.5033 | 11800 | 0.0 | - |
| 0.5055 | 11850 | 0.125 | - |
| 0.5076 | 11900 | 0.0042 | - |
| 0.5097 | 11950 | 0.012 | - |
| 0.5119 | 12000 | 0.0046 | - |
| 0.5140 | 12050 | 0.0001 | - |
| 0.5161 | 12100 | 0.0062 | - |
| 0.5183 | 12150 | 0.0 | - |
| 0.5204 | 12200 | 0.017 | - |
| 0.5225 | 12250 | 0.2668 | - |
| 0.5247 | 12300 | 0.0986 | - |
| 0.5268 | 12350 | 0.0071 | - |
| 0.5289 | 12400 | 0.0055 | - |
| 0.5311 | 12450 | 0.006 | - |
| 0.5332 | 12500 | 0.0057 | - |
| 0.5353 | 12550 | 0.0044 | - |
| 0.5375 | 12600 | 0.0039 | - |
| 0.5396 | 12650 | 0.1685 | - |
| 0.5417 | 12700 | 0.125 | - |
| 0.5438 | 12750 | 0.0026 | - |
| 0.5460 | 12800 | 0.0 | - |
| 0.5481 | 12850 | 0.0 | - |
| 0.5502 | 12900 | 0.1024 | - |
| 0.5524 | 12950 | 0.0 | - |
| 0.5545 | 13000 | 0.0 | - |
| 0.5566 | 13050 | 0.0083 | - |
| 0.5588 | 13100 | 0.0 | - |
| 0.5609 | 13150 | 0.0001 | - |
| 0.5630 | 13200 | 0.0 | - |
| 0.5652 | 13250 | 0.095 | - |
| 0.5673 | 13300 | 0.0001 | - |
| 0.5694 | 13350 | 0.0026 | - |
| 0.5716 | 13400 | 0.0 | - |
| 0.5737 | 13450 | 0.0041 | - |
| 0.5758 | 13500 | 0.1654 | - |
| 0.5780 | 13550 | 0.0003 | - |
| 0.5801 | 13600 | 0.0056 | - |
| 0.5822 | 13650 | 0.0 | - |
| 0.5844 | 13700 | 0.1012 | - |
| 0.5865 | 13750 | 0.0 | - |
| 0.5886 | 13800 | 0.0001 | - |
| 0.5908 | 13850 | 0.0042 | - |
| 0.5929 | 13900 | 0.0122 | - |
| 0.5950 | 13950 | 0.1047 | - |
| 0.5972 | 14000 | 0.0 | - |
| 0.5993 | 14050 | 0.0121 | - |
| 0.6014 | 14100 | 0.0 | - |
| 0.6036 | 14150 | 0.0 | - |
| 0.6057 | 14200 | 0.0 | - |
| 0.6078 | 14250 | 0.0105 | - |
| 0.6100 | 14300 | 0.0 | - |
| 0.6121 | 14350 | 0.011 | - |
| 0.6142 | 14400 | 0.0329 | - |
| 0.6164 | 14450 | 0.0942 | - |
| 0.6185 | 14500 | 0.0173 | - |
| 0.6206 | 14550 | 0.0 | - |
| 0.6228 | 14600 | 0.1032 | - |
| 0.6249 | 14650 | 0.016 | - |
| 0.6270 | 14700 | 0.0079 | - |
| 0.6292 | 14750 | 0.0 | - |
| 0.6313 | 14800 | 0.1088 | - |
| 0.6334 | 14850 | 0.0091 | - |
| 0.6356 | 14900 | 0.0039 | - |
| 0.6377 | 14950 | 0.0 | - |
| 0.6398 | 15000 | 0.0 | - |
| 0.6420 | 15050 | 0.0 | - |
| 0.6441 | 15100 | 0.1654 | - |
| 0.6462 | 15150 | 0.0 | - |
| 0.6484 | 15200 | 0.0002 | - |
| 0.6505 | 15250 | 0.0 | - |
| 0.6526 | 15300 | 0.1745 | - |
| 0.6548 | 15350 | 0.0 | - |
| 0.6569 | 15400 | 0.156 | - |
| 0.6590 | 15450 | 0.0 | - |
| 0.6611 | 15500 | 0.0 | - |
| 0.6633 | 15550 | 0.1755 | - |
| 0.6654 | 15600 | 0.008 | - |
| 0.6675 | 15650 | 0.0 | - |
| 0.6697 | 15700 | 0.0 | - |
| 0.6718 | 15750 | 0.0041 | - |
| 0.6739 | 15800 | 0.0037 | - |
| 0.6761 | 15850 | 0.0 | - |
| 0.6782 | 15900 | 0.0 | - |
| 0.6803 | 15950 | 0.0092 | - |
| 0.6825 | 16000 | 0.0071 | - |
| 0.6846 | 16050 | 0.0053 | - |
| 0.6867 | 16100 | 0.0 | - |
| 0.6889 | 16150 | 0.004 | - |
| 0.6910 | 16200 | 0.0036 | - |
| 0.6931 | 16250 | 0.0 | - |
| 0.6953 | 16300 | 0.0 | - |
| 0.6974 | 16350 | 0.184 | - |
| 0.6995 | 16400 | 0.0 | - |
| 0.7017 | 16450 | 0.0133 | - |
| 0.7038 | 16500 | 0.0 | - |
| 0.7059 | 16550 | 0.174 | - |
| 0.7081 | 16600 | 0.0 | - |
| 0.7102 | 16650 | 0.0233 | - |
| 0.7123 | 16700 | 0.0117 | - |
| 0.7145 | 16750 | 0.0272 | - |
| 0.7166 | 16800 | 0.0095 | - |
| 0.7187 | 16850 | 0.0 | - |
| 0.7209 | 16900 | 0.1656 | - |
| 0.7230 | 16950 | 0.0055 | - |
| 0.7251 | 17000 | 0.0 | - |
| 0.7273 | 17050 | 0.1716 | - |
| 0.7294 | 17100 | 0.0 | - |
| 0.7315 | 17150 | 0.0 | - |
| 0.7337 | 17200 | 0.1035 | - |
| 0.7358 | 17250 | 0.0694 | - |
| 0.7379 | 17300 | 0.1733 | - |
| 0.7401 | 17350 | 0.0092 | - |
| 0.7422 | 17400 | 0.1656 | - |
| 0.7443 | 17450 | 0.0 | - |
| 0.7465 | 17500 | 0.1655 | - |
| 0.7486 | 17550 | 0.0059 | - |
| 0.7507 | 17600 | 0.1116 | - |
| 0.7529 | 17650 | 0.0 | - |
| 0.7550 | 17700 | 0.0068 | - |
| 0.7571 | 17750 | 0.0053 | - |
| 0.7593 | 17800 | 0.0 | - |
| 0.7614 | 17850 | 0.0062 | - |
| 0.7635 | 17900 | 0.0104 | - |
| 0.7657 | 17950 | 0.1727 | - |
| 0.7678 | 18000 | 0.0 | - |
| 0.7699 | 18050 | 0.0 | - |
| 0.7721 | 18100 | 0.0 | - |
| 0.7742 | 18150 | 0.0714 | - |
| 0.7763 | 18200 | 0.0 | - |
| 0.7785 | 18250 | 0.0 | - |
| 0.7806 | 18300 | 0.0002 | - |
| 0.7827 | 18350 | 0.0 | - |
| 0.7848 | 18400 | 0.0 | - |
| 0.7870 | 18450 | 0.0996 | - |
| 0.7891 | 18500 | 0.0 | - |
| 0.7912 | 18550 | 0.0 | - |
| 0.7934 | 18600 | 0.0139 | - |
| 0.7955 | 18650 | 0.0 | - |
| 0.7976 | 18700 | 0.1701 | - |
| 0.7998 | 18750 | 0.0 | - |
| 0.8019 | 18800 | 0.0001 | - |
| 0.8040 | 18850 | 0.0 | - |
| 0.8062 | 18900 | 0.0 | - |
| 0.8083 | 18950 | 0.0 | - |
| 0.8104 | 19000 | 0.0 | - |
| 0.8126 | 19050 | 0.0 | - |
| 0.8147 | 19100 | 0.1093 | - |
| 0.8168 | 19150 | 0.0 | - |
| 0.8190 | 19200 | 0.0 | - |
| 0.8211 | 19250 | 0.0075 | - |
| 0.8232 | 19300 | 0.1079 | - |
| 0.8254 | 19350 | 0.0112 | - |
| 0.8275 | 19400 | 0.1655 | - |
| 0.8296 | 19450 | 0.0152 | - |
| 0.8318 | 19500 | 0.1152 | - |
| 0.8339 | 19550 | 0.0 | - |
| 0.8360 | 19600 | 0.0 | - |
| 0.8382 | 19650 | 0.0079 | - |
| 0.8403 | 19700 | 0.0 | - |
| 0.8424 | 19750 | 0.0 | - |
| 0.8446 | 19800 | 0.0 | - |
| 0.8467 | 19850 | 0.0 | - |
| 0.8488 | 19900 | 0.1161 | - |
| 0.8510 | 19950 | 0.0057 | - |
| 0.8531 | 20000 | 0.0 | - |
| 0.8552 | 20050 | 0.0046 | - |
| 0.8574 | 20100 | 0.0 | - |
| 0.8595 | 20150 | 0.0068 | - |
| 0.8616 | 20200 | 0.0 | - |
| 0.8638 | 20250 | 0.0 | - |
| 0.8659 | 20300 | 0.0 | - |
| 0.8680 | 20350 | 0.0 | - |
| 0.8702 | 20400 | 0.0141 | - |
| 0.8723 | 20450 | 0.0001 | - |
| 0.8744 | 20500 | 0.0 | - |
| 0.8766 | 20550 | 0.0 | - |
| 0.8787 | 20600 | 0.0171 | - |
| 0.8808 | 20650 | 0.0 | - |
| 0.8830 | 20700 | 0.0 | - |
| 0.8851 | 20750 | 0.0077 | - |
| 0.8872 | 20800 | 0.0 | - |
| 0.8894 | 20850 | 0.0 | - |
| 0.8915 | 20900 | 0.0 | - |
| 0.8936 | 20950 | 0.0 | - |
| 0.8958 | 21000 | 0.0 | - |
| 0.8979 | 21050 | 0.0 | - |
| 0.9000 | 21100 | 0.0 | - |
| 0.9021 | 21150 | 0.0 | - |
| 0.9043 | 21200 | 0.0 | - |
| 0.9064 | 21250 | 0.1048 | - |
| 0.9085 | 21300 | 0.006 | - |
| 0.9107 | 21350 | 0.0 | - |
| 0.9128 | 21400 | 0.0 | - |
| 0.9149 | 21450 | 0.005 | - |
| 0.9171 | 21500 | 0.0 | - |
| 0.9192 | 21550 | 0.0325 | - |
| 0.9213 | 21600 | 0.0136 | - |
| 0.9235 | 21650 | 0.0 | - |
| 0.9256 | 21700 | 0.0062 | - |
| 0.9277 | 21750 | 0.1656 | - |
| 0.9299 | 21800 | 0.1648 | - |
| 0.9320 | 21850 | 0.0 | - |
| 0.9341 | 21900 | 0.0 | - |
| 0.9363 | 21950 | 0.0 | - |
| 0.9384 | 22000 | 0.2844 | - |
| 0.9405 | 22050 | 0.0 | - |
| 0.9427 | 22100 | 0.0 | - |
| 0.9448 | 22150 | 0.0 | - |
| 0.9469 | 22200 | 0.0 | - |
| 0.9491 | 22250 | 0.0 | - |
| 0.9512 | 22300 | 0.2096 | - |
| 0.9533 | 22350 | 0.0073 | - |
| 0.9555 | 22400 | 0.006 | - |
| 0.9576 | 22450 | 0.0 | - |
| 0.9597 | 22500 | 0.0079 | - |
| 0.9619 | 22550 | 0.0071 | - |
| 0.9640 | 22600 | 0.0 | - |
| 0.9661 | 22650 | 0.006 | - |
| 0.9683 | 22700 | 0.1048 | - |
| 0.9704 | 22750 | 0.007 | - |
| 0.9725 | 22800 | 0.0 | - |
| 0.9747 | 22850 | 0.0 | - |
| 0.9768 | 22900 | 0.007 | - |
| 0.9789 | 22950 | 0.0 | - |
| 0.9811 | 23000 | 0.1049 | - |
| 0.9832 | 23050 | 0.0069 | - |
| 0.9853 | 23100 | 0.0 | - |
| 0.9875 | 23150 | 0.0 | - |
| 0.9896 | 23200 | 0.0 | - |
| 0.9917 | 23250 | 0.0 | - |
| 0.9939 | 23300 | 0.007 | - |
| 0.9960 | 23350 | 0.0147 | - |
| 0.9981 | 23400 | 0.0 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.4
- Transformers: 4.36.2
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "widget": [{"text": "Suasana:Tempatnya ramai sekali dan ngantei banget. Suasana di dalam resto sangat panas dan padat. Makanannya enak enak."}, {"text": "bener2 pedes puolll:Rasanya sgt gak cocok dilidah gue orang bekasi.. ayamnya ayam kampung sih tp kecil bgt (beli yg dada).. terus tempe bacem sgt padet dan tahunya enak sih.. untuk sambel pedes bgt bener2 pedes puolll, tp rasanya gasukaa."}, {"text": "gang:Suasana di dalam resto sangat panas dan padat. Makanannya enak enak. Dan restonya ada di beberapa tempat dalam satu gang."}, {"text": "tempe:Menu makanannya khas Sunda ada ayam, pepes ikan, babat, tahu, tempe, sayur-sayur. Tidak banyak variasinya tapi kualitas rasanya oke. Saat itu pesen ayam bakar, jukut goreng, tempe sama pepes tahu. Ini semuanya enak (menurut pendapat pribadi)."}, {"text": "babat:Kemaren kebetulan makan babat sama nyobain cumi, buat tekstur babatnya itu engga alot sama sekali dan tidak amis, sedangkan buat cumi utuh lumayan gede juga tekstur kenyel kenyelnya dapet dan mateng juga sampe ke dalem. "}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit Aspect Model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.80625, "name": "Accuracy"}]}]}]} | pahri/setfit-indo-resto-RM-ibu-imas-aspect | null | [
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"model-index",
"region:us"
] | null | 2024-05-01T15:18:12+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetune-test4
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1223
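The card ships no usage snippet, so the following is only a sketch: it assumes the adapter can be loaded with PEFT's auto class on top of the GPTQ base recorded in its config (GPTQ runtime dependencies such as `optimum`/`auto-gptq` need to be installed) and uses Mistral's `[INST]` prompt format.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the GPTQ base model recorded in the adapter config, then applies this LoRA adapter.
model = AutoPeftModelForCausalLM.from_pretrained("AmaanUsmani/Finetune-test4", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")

inputs = tokenizer("[INST] Hello! [/INST]", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))
```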
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 20
- mixed_precision_training: Native AMP
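For reference, these settings map roughly onto the following `transformers.TrainingArguments`; the exact trainer setup (including `output_dir`) is not published, so treat this as a reconstruction:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Finetune-test4",       # hypothetical; the real output directory is not given
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,     # 4 * 4 = total train batch size 16
    warmup_steps=2,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    fp16=True,                         # "Native AMP" mixed precision (bf16 would also match the log)
    seed=42,
)
```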
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.767 | 0.9956 | 56 | 0.5333 |
| 0.4313 | 1.9911 | 112 | 0.4449 |
| 0.3107 | 2.9867 | 168 | 0.4640 |
| 0.2198 | 4.0 | 225 | 0.5196 |
| 0.1633 | 4.9956 | 281 | 0.5811 |
| 0.1209 | 5.9911 | 337 | 0.6468 |
| 0.0944 | 6.9867 | 393 | 0.6891 |
| 0.0745 | 8.0 | 450 | 0.7297 |
| 0.064 | 8.9956 | 506 | 0.7844 |
| 0.0557 | 9.9911 | 562 | 0.8384 |
| 0.0489 | 10.9867 | 618 | 0.8632 |
| 0.0433 | 12.0 | 675 | 0.9223 |
| 0.0413 | 12.9956 | 731 | 0.9526 |
| 0.0389 | 13.9911 | 787 | 0.9552 |
| 0.0375 | 14.9867 | 843 | 1.0303 |
| 0.0355 | 16.0 | 900 | 1.0489 |
| 0.0355 | 16.9956 | 956 | 1.0804 |
| 0.0347 | 17.9911 | 1012 | 1.0983 |
| 0.0341 | 18.9867 | 1068 | 1.1147 |
| 0.0328 | 19.9111 | 1120 | 1.1223 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.0.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "Finetune-test4", "results": []}]} | AmaanUsmani/Finetune-test4 | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T15:20:37+00:00 |
image-to-image | diffusers | {} | GraydientPlatformAPI/clarity3-inpainting | null | [
"diffusers",
"safetensors",
"diffusers:StableDiffusionInpaintPipeline",
"region:us"
] | null | 2024-05-01T15:21:19+00:00 |
|
null | null | {} | Bustinza/mi-super-modelo | null | [
"region:us"
] | null | 2024-05-01T15:22:48+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
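Since the section above is left blank, here is a hedged placeholder sketch; it assumes the checkpoint at `hams2/split1` (this row's repository, tagged `gpt2` and `text-generation`) loads with the standard pipeline:

```python
from transformers import pipeline

# Repository id taken from this row's metadata; the prompt is an arbitrary example.
generator = pipeline("text-generation", model="hams2/split1")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```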
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/split1 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:24:07+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/split2 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:24:45+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/giveup | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:25:06+00:00 |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
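Since the section above is left blank, here is a hedged placeholder sketch; it assumes the checkpoint at `hams2/bertclass` (this row's repository, tagged `bert` and `text-classification`) loads with the standard pipeline:

```python
from transformers import pipeline

# Repository id taken from this row's metadata; the input sentence is an arbitrary example.
classifier = pipeline("text-classification", model="hams2/bertclass")
print(classifier("This is an example sentence."))
```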
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | hams2/bertclass | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:25:43+00:00 |
null | transformers |
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/xxx777xxxASD/Chaotic-Soliloquy-4x8B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
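As a concrete sketch (assuming the `huggingface_hub` package is installed), any single file from the table below can be fetched like this and then passed to a GGUF-capable runtime such as llama.cpp:

```python
from huggingface_hub import hf_hub_download

# Example choice: the Q4_K_S quant recommended below; any filename from the table works.
path = hf_hub_download(
    repo_id="mradermacher/Chaotic-Soliloquy-4x8B-GGUF",
    filename="Chaotic-Soliloquy-4x8B.Q4_K_S.gguf",
)
print(path)  # local path to hand to the GGUF runtime
```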
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q2_K.gguf) | Q2_K | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.IQ3_XS.gguf) | IQ3_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q3_K_S.gguf) | Q3_K_S | 11.0 | |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.IQ3_S.gguf) | IQ3_S | 11.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.IQ3_M.gguf) | IQ3_M | 11.2 | |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q3_K_M.gguf) | Q3_K_M | 12.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q3_K_L.gguf) | Q3_K_L | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.IQ4_XS.gguf) | IQ4_XS | 13.7 | |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q4_K_S.gguf) | Q4_K_S | 14.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q4_K_M.gguf) | Q4_K_M | 15.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q5_K_S.gguf) | Q5_K_S | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q5_K_M.gguf) | Q5_K_M | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q6_K.gguf) | Q6_K | 20.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Chaotic-Soliloquy-4x8B-GGUF/resolve/main/Chaotic-Soliloquy-4x8B.Q8_0.gguf) | Q8_0 | 26.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["moe"], "base_model": "xxx777xxxASD/Chaotic-Soliloquy-4x8B", "quantized_by": "mradermacher"} | mradermacher/Chaotic-Soliloquy-4x8B-GGUF | null | [
"transformers",
"gguf",
"moe",
"en",
"base_model:xxx777xxxASD/Chaotic-Soliloquy-4x8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:27:25+00:00 |
automatic-speech-recognition | transformers | {} | wh1tewhale/dysarthria-automatic-speech-recognition | null | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:27:39+00:00 |
|
text-generation | transformers | {} | noeloco/camel-lora-dpo-merged | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:28:57+00:00 |
|
text-classification | transformers | {} | mynameissilasuibk/Hello-world | null | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:29:00+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Owaner/CodexTokenizer6kOfficial | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:29:09+00:00 |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
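A minimal sketch, assuming a standard T5-style text2text checkpoint (per the repo's t5 / text2text-generation tags); the input text is illustrative.

```python
# Minimal sketch: load the checkpoint as a seq2seq model and generate.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Audino/my-awesome-modelv4-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Example input text for the model.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```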
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Audino/my-awesome-modelv4-small | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:29:44+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gen-z-translate-llama-3-instruct-v1
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
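A minimal sketch of applying this adapter on top of the gated base model; it assumes access to meta-llama/Meta-Llama-3-8B-Instruct, and the example prompt and generation settings are illustrative.

```python
# Minimal sketch: load the base model, attach this PEFT adapter, and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "llm-wizard/gen-z-translate-llama-3-instruct-v1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Translate to gen-z slang: that party was great.", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```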
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "gen-z-translate-llama-3-instruct-v1", "results": []}]} | llm-wizard/gen-z-translate-llama-3-instruct-v1 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-05-01T15:32:09+00:00 |
null | null |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {} | blackninja19/normal-cancer | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-05-01T15:33:23+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
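A minimal sketch, assuming a standard Llama-architecture causal LM whose tokenizer ships a chat template (per the repo's llama / conversational tags); the example message is illustrative.

```python
# Minimal sketch: load the checkpoint as a causal LM and run one chat turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-maker-space/gen-z-translate-llama-3-instruct-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Translate to gen-z slang: that party was great."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```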
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | ai-maker-space/gen-z-translate-llama-3-instruct-v1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:33:47+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/MarsupialAI/Aqueducts-18B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Aqueducts-18B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
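A minimal sketch of fetching a single quant from this repo with huggingface_hub and loading it with llama-cpp-python; the repo and file names come from the table below, and the context size is illustrative.

```python
# Minimal sketch: download one imatrix quant and load it.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/Aqueducts-18B-i1-GGUF",
    filename="Aqueducts-18B.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # illustrative context window
print(llm("The aqueducts of ancient Rome", max_tokens=64)["choices"][0]["text"])
```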
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ1_M.gguf) | i1-IQ1_M | 4.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ2_S.gguf) | i1-IQ2_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ2_M.gguf) | i1-IQ2_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q2_K.gguf) | i1-Q2_K | 6.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 7.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ3_S.gguf) | i1-IQ3_S | 7.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 7.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ3_M.gguf) | i1-IQ3_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 8.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 9.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q4_0.gguf) | i1-Q4_0 | 10.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 10.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 10.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/Aqueducts-18B-i1-GGUF/resolve/main/Aqueducts-18B.i1-Q6_K.gguf) | i1-Q6_K | 14.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "base_model": "MarsupialAI/Aqueducts-18B", "quantized_by": "mradermacher"} | mradermacher/Aqueducts-18B-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:MarsupialAI/Aqueducts-18B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:35:35+00:00 |
null | null | {"license": "openrail"} | Loren85/Michael-Jackson-young-JK5 | null | [
"license:openrail",
"region:us"
] | null | 2024-05-01T15:36:14+00:00 |
|
null | null | {} | l3ipp/fin_l3itc | null | [
"region:us"
] | null | 2024-05-01T15:36:31+00:00 |
|
automatic-speech-recognition | transformers | {} | raidavid/whisper-tiny-20240501 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:36:34+00:00 |
|
null | null | {} | asude55/duygua7 | null | [
"region:us"
] | null | 2024-05-01T15:36:59+00:00 |
|
question-answering | null | {"language": ["fa"], "license": "llama2", "tags": ["music"], "datasets": ["HuggingFaceFW/fineweb"], "pipeline_tag": "question-answering"} | amirkalateh/ugutct | null | [
"music",
"question-answering",
"fa",
"dataset:HuggingFaceFW/fineweb",
"license:llama2",
"region:us"
] | null | 2024-05-01T15:37:47+00:00 |
|
null | null | {} | teleprint-me/mistral-7B-instruct-v0.2 | null | [
"gguf",
"region:us"
] | null | 2024-05-01T15:37:54+00:00 |
|
text-generation | transformers | This model is a version of Meta-Llama-3-8B that has been fine-tuned on our in-house custom data.
Training spec:
We utilized a single A100x4 node to train our model
with DeepSpeed / HuggingFace TRL Trainer / HuggingFace Accelerate | {"language": ["ko"], "license": "llama3", "datasets": ["Custom_datasets"], "pipeline_tag": "text-generation", "base_model": "meta-llama/Meta-Llama-3-8B"} | Alphacode-AI/Alphallama3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"dataset:Custom_datasets",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:38:49+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Sayan01/Phi-by2-Chat-T1 | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:39:04+00:00 |
null | null | {"license": "openrail"} | Pedro1230987/Clarencio | null | [
"license:openrail",
"region:us"
] | null | 2024-05-01T15:39:37+00:00 |
|
image-classification | transformers | {} | Heem2/AI-vs-Real-Image-Detection | null | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:40:34+00:00 |
|
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/icefog72/IceLatteRP-7b
<!-- provided-files -->
weighted/imatrix quants do not appear to be available from me at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
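A minimal llama-cpp-python sketch for this model; the Alpaca-style prompt format is an assumption based on the card's alpaca tag, and the file name comes from the table below.

```python
# Minimal sketch: plain completion with an assumed Alpaca-style prompt.
from llama_cpp import Llama

llm = Llama(model_path="IceLatteRP-7b.Q4_K_M.gguf", n_ctx=8192)  # illustrative context window

prompt = (
    "### Instruction:\n"
    "Write a short in-character greeting for a tavern keeper.\n\n"
    "### Response:\n"
)
print(llm(prompt, max_tokens=200, temperature=0.8)["choices"][0]["text"])
```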
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/IceLatteRP-7b-GGUF/resolve/main/IceLatteRP-7b.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["mergekit", "merge", "alpaca", "mistral", "not-for-all-audiences", "nsfw"], "base_model": "icefog72/IceLatteRP-7b", "quantized_by": "mradermacher"} | mradermacher/IceLatteRP-7b-GGUF | null | [
"transformers",
"gguf",
"mergekit",
"merge",
"alpaca",
"mistral",
"not-for-all-audiences",
"nsfw",
"en",
"base_model:icefog72/IceLatteRP-7b",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:40:54+00:00 |
null | null | {} | Weblet/llama2-7b-hf-chat-lora-v3-turbo17145780973643503_mlabonne-guanaco-llama2-1k_train | null | [
"region:us"
] | null | 2024-05-01T15:42:01+00:00 |
|
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | sreddy109/m3-test | null | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:42:16+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-student_six_classes
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0039
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
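A minimal inference sketch for this checkpoint via the image-classification pipeline (the image path is illustrative):

```python
# Minimal sketch: run the fine-tuned classifier on a single image.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="system-admin/swin-tiny-patch4-window7-224-finetuned-student_six_classes",
)
print(classifier("example.jpg"))  # local path or URL to an image
```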
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6323 | 0.9362 | 11 | 0.3212 | 0.8428 |
| 0.2352 | 1.9574 | 23 | 0.0927 | 0.9843 |
| 0.0961 | 2.9787 | 35 | 0.0497 | 0.9921 |
| 0.0685 | 4.0 | 47 | 0.0207 | 0.9969 |
| 0.0386 | 4.9362 | 58 | 0.0216 | 0.9969 |
| 0.0254 | 5.9574 | 70 | 0.0164 | 0.9969 |
| 0.0326 | 6.9787 | 82 | 0.0080 | 0.9969 |
| 0.0207 | 8.0 | 94 | 0.0057 | 0.9984 |
| 0.0233 | 8.9362 | 105 | 0.0042 | 1.0 |
| 0.0154 | 9.3617 | 110 | 0.0039 | 1.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-student_six_classes", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]} | system-admin/swin-tiny-patch4-window7-224-finetuned-student_six_classes | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:42:29+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Poojithpoosa/newsclassification
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1349
- Validation Loss: 0.1256
- Train Accuracy: 0.9657
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-11, 'decay_steps': 150000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.1341 | 0.1256 | 0.9657 | 0 |
| 0.1343 | 0.1256 | 0.9657 | 1 |
| 0.1349 | 0.1256 | 0.9657 | 2 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "Poojithpoosa/newsclassification", "results": []}]} | Poojithpoosa/newsclassification | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:43:28+00:00 |
null | null | {} | GianPehn/PD_for_Anime | null | [
"region:us"
] | null | 2024-05-01T15:44:13+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Weblet/llama-2-7b-chat-hf-turbo1714578275090292_mlabonne-guanaco-llama2-1k_train | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:46:00+00:00 |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large_robust_stream_speaker_s2_18p19_cp2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9784
- Wer: 0.2133
## Model description
More information needed
## Intended uses & limitations
More information needed
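No usage example is given in this card, so the following is a rough, unofficial inference sketch rather than an endorsed recipe. It assumes the checkpoint ships a processor/tokenizer suitable for the ASR pipeline; the repo id is taken from this row.
```python
from transformers import pipeline

# Repo id from this row; audio is expected as a mono waveform (typically 16 kHz for wav2vec2).
asr = pipeline(
    "automatic-speech-recognition",
    model="apirbadian/wav2vec2-large_robust_stream_speaker_s2_18p19_cp2",
)
print(asr("sample.wav"))  # path to your own audio file
```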
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 24
- total_train_batch_size: 288
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 6000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0005 | 500.0 | 500 | 1.0049 | 0.1867 |
| 0.0002 | 1000.0 | 1000 | 0.9874 | 0.2 |
| 0.0001 | 1500.0 | 1500 | 1.0479 | 0.24 |
| 0.0001 | 2000.0 | 2000 | 0.9882 | 0.2133 |
| 0.0001 | 2500.0 | 2500 | 1.0299 | 0.2 |
| 0.0001 | 3000.0 | 3000 | 1.0099 | 0.2667 |
| 0.0001 | 3500.0 | 3500 | 1.0270 | 0.24 |
| 0.0001 | 4000.0 | 4000 | 1.0409 | 0.2133 |
| 0.0 | 4500.0 | 4500 | 0.9897 | 0.2133 |
| 0.0 | 5000.0 | 5000 | 0.9449 | 0.2133 |
| 0.0 | 5500.0 | 5500 | 0.9801 | 0.2133 |
| 0.0 | 6000.0 | 6000 | 0.9784 | 0.2133 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.13.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
| {"tags": ["generated_from_trainer"], "metrics": ["wer"], "model-index": [{"name": "wav2vec2-large_robust_stream_speaker_s2_18p19_cp2", "results": []}]} | apirbadian/wav2vec2-large_robust_stream_speaker_s2_18p19_cp2 | null | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"wav2vec2",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:46:38+00:00 |
null | null | {} | jayspring/distilbert-base-uncased-finetuned-imdb | null | [
"region:us"
] | null | 2024-05-01T15:47:35+00:00 |
|
null | null | {} | Maikel7/Human | null | [
"region:us"
] | null | 2024-05-01T15:47:46+00:00 |
|
null | null | {} | Gpent/onebreast | null | [
"region:us"
] | null | 2024-05-01T15:48:30+00:00 |
|
null | null | {} | dortch2001/Wiggles | null | [
"region:us"
] | null | 2024-05-01T15:48:41+00:00 |
|
null | null | {} | shynewsky/kainM-240501 | null | [
"region:us"
] | null | 2024-05-01T15:49:26+00:00 |
|
null | null | {"license": "openrail"} | itt0lp/oliviasourguts | null | [
"license:openrail",
"region:us"
] | null | 2024-05-01T15:49:40+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | slingshot/Meta-Llama-3-8B-Instruct-2024-05-01-13-21-08-conversation_model | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:49:53+00:00 |
text-to-audio | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zlm-fil_b64_le5_s8000
This model is a fine-tuned version of [mikhail-panzo/zlm_b64_le4_s12000](https://huggingface.co/mikhail-panzo/zlm_b64_le4_s12000) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4118
## Model description
More information needed
## Intended uses & limitations
More information needed
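No usage example is given in this card; below is a rough, unofficial sketch of SpeechT5 text-to-speech inference with this checkpoint. The HiFi-GAN vocoder id and the zero speaker embedding are assumptions (a real x-vector speaker embedding will sound much better), and the example text is arbitrary because the card does not state the target language.
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "mikhail-panzo/zlm-fil_b64_le5_s8000"  # repo id from this row
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # assumed vocoder

inputs = processor(text="Kumusta ka?", return_tensors="pt")  # example input; target language not documented
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real x-vector if available
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 generates 16 kHz audio
```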
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.5529 | 22.2222 | 500 | 0.5000 |
| 0.4974 | 44.4444 | 1000 | 0.4557 |
| 0.4716 | 66.6667 | 1500 | 0.4359 |
| 0.453 | 88.8889 | 2000 | 0.4246 |
| 0.4428 | 111.1111 | 2500 | 0.4196 |
| 0.4332 | 133.3333 | 3000 | 0.4171 |
| 0.4246 | 155.5556 | 3500 | 0.4154 |
| 0.4202 | 177.7778 | 4000 | 0.4133 |
| 0.4223 | 200.0 | 4500 | 0.4145 |
| 0.4127 | 222.2222 | 5000 | 0.4118 |
| 0.418 | 244.4444 | 5500 | 0.4130 |
| 0.4137 | 266.6667 | 6000 | 0.4130 |
| 0.4105 | 288.8889 | 6500 | 0.4127 |
| 0.4164 | 311.1111 | 7000 | 0.4127 |
| 0.4088 | 333.3333 | 7500 | 0.4120 |
| 0.4028 | 355.5556 | 8000 | 0.4118 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "mikhail-panzo/zlm_b64_le4_s12000", "model-index": [{"name": "zlm-fil_b64_le5_s8000", "results": []}]} | mikhail-panzo/zlm-fil_b64_le5_s8000 | null | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:mikhail-panzo/zlm_b64_le4_s12000",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:50:15+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/KnutJaegersberg/Deita-Mixtral-8x7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
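As a concrete (unofficial) illustration, one way to fetch and run a single-file quant from this repo is with `llama-cpp-python`; the filename is taken from the table below, and the choice of runtime is an assumption — any GGUF-compatible runtime works.
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # llama-cpp-python, assumed installed

# Q4_K_M is listed below as the fast/recommended size/speed trade-off.
path = hf_hub_download(
    "mradermacher/Deita-Mixtral-8x7b-i1-GGUF",
    "Deita-Mixtral-8x7b.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a Mixtral model? A:", max_tokens=64)
print(out["choices"][0]["text"])
```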
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deita-Mixtral-8x7b-i1-GGUF/resolve/main/Deita-Mixtral-8x7b.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "KnutJaegersberg/Deita-Mixtral-8x7b", "quantized_by": "mradermacher"} | mradermacher/Deita-Mixtral-8x7b-i1-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:KnutJaegersberg/Deita-Mixtral-8x7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:50:48+00:00 |
null | null | {} | 0x1nsomnia/test_model | null | [
"region:us"
] | null | 2024-05-01T15:50:51+00:00 |
|
text-generation | transformers |
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | {"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]} | Sirion2/autotrain-x6e1x-3rqzd | null | [
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:51:04+00:00 |
text-generation | transformers | {} | AyoubELFallah/llama-2-7b-SEBN | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:51:20+00:00 |
|
reinforcement-learning | stable-baselines3 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga k1101jh -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga k1101jh -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga k1101jh
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
| {"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "820.50 +/- 397.27", "name": "mean_reward", "verified": false}]}]}]} | k1101jh/dqn-SpaceInvadersNoFrameskip-v4 | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-01T15:51:20+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
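Since no official snippet is given, here is a minimal sketch assuming this is a plain (not instruction-tuned) Llama-2-style causal LM; the repo id is taken from this row and the prompt is illustrative only.
```python
from transformers import pipeline

# Base (pretrained) model, so expect plain text continuation rather than chat behaviour.
generator = pipeline("text-generation", model="team-sanai/llama2_7B_pretrain")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```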
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | team-sanai/llama2_7B_pretrain | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:51:24+00:00 |
null | null | {"license": "openrail"} | art2015/Prostonick | null | [
"license:openrail",
"region:us"
] | null | 2024-05-01T15:52:30+00:00 |
|
automatic-speech-recognition | transformers | {} | truvideo/whisper-large-AIVA | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:53:38+00:00 |
|
text-generation | transformers | {} | TitanML/Llama3-OpenBioLLM-70B-AWQ-4bit | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T15:54:03+00:00 |
|
null | null | {} | Zethearc/swin-tiny-patch4-window7-224-finetuned-eurosat | null | [
"region:us"
] | null | 2024-05-01T15:54:56+00:00 |
|
null | null | {"license": "llama2"} | amirkalateh/e32qr | null | [
"license:llama2",
"region:us"
] | null | 2024-05-01T15:55:11+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | slingshot/Meta-Llama-3-8B-Instruct-2024-04-30-18-15-44-predict_next_actions_only_with_masking | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:55:15+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Poojithpoosa/hatespeechmodel
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5816
- Validation Loss: 0.5719
- Train Accuracy: 0.7743
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
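No usage example is provided; the sketch below is an unofficial illustration of TensorFlow inference with this checkpoint. The repo id is taken from this row, and since the label names/order are not documented in the card, only raw class probabilities are printed.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "Poojithpoosa/hatespeechmodel"  # repo id from this row
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("example sentence to score", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # class meaning is not documented in this card
```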
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-12, 'decay_steps': 7740, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.5812 | 0.5719 | 0.7743 | 0 |
| 0.5812 | 0.5719 | 0.7743 | 1 |
| 0.5816 | 0.5719 | 0.7743 | 2 |
### Framework versions
- Transformers 4.40.1
- TensorFlow 2.15.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "Poojithpoosa/hatespeechmodel", "results": []}]} | Poojithpoosa/hatespeechmodel | null | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T15:55:41+00:00 |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual `{algo}-{env}.zip` naming used by huggingface_sb3; adjust it to the file actually stored in this repository):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed to follow the "{algo}-{env}.zip" convention.
checkpoint = load_from_hub("Ctdunn/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "238.97 +/- 18.44", "name": "mean_reward", "verified": false}]}]}]} | Ctdunn/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-01T15:56:13+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_withdpo_4iters_bs256_432lr_iter_3
This model is a fine-tuned version of [ShenaoZ/0.001_withdpo_4iters_bs256_432lr_iter_2](https://huggingface.co/ShenaoZ/0.001_withdpo_4iters_bs256_432lr_iter_2) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
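For readers who want to see how the hyperparameters listed above map onto 🤗 `TrainingArguments`, a rough illustrative sketch follows. The actual run used the alignment-handbook/TRL DPO recipes, so this is not the exact training script, and the output directory name is arbitrary.
```python
from transformers import TrainingArguments

# Mirrors the list above: 8 GPUs x batch 8 x grad-accum 4 = effective train batch 256.
args = TrainingArguments(
    output_dir="dpo-iter3-output",   # arbitrary name, not from the card
    learning_rate=3e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
)
```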
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.001_withdpo_4iters_bs256_432lr_iter_2", "model-index": [{"name": "0.001_withdpo_4iters_bs256_432lr_iter_3", "results": []}]} | ShenaoZ/0.001_withdpo_4iters_bs256_432lr_iter_3 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.001_withdpo_4iters_bs256_432lr_iter_2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:56:15+00:00 |
table-question-answering | fasttext | {"language": ["fa"], "license": "llama2", "library_name": "fasttext", "datasets": ["HuggingFaceFW/fineweb"], "metrics": ["character"], "pipeline_tag": "table-question-answering"} | amirkalateh/eqwdwad | null | [
"fasttext",
"table-question-answering",
"fa",
"dataset:HuggingFaceFW/fineweb",
"license:llama2",
"region:us"
] | null | 2024-05-01T15:56:20+00:00 |
|
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the SQuAD v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2978
## Model description
More information needed
## Intended uses & limitations
More information needed
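No usage snippet is provided, so the following is an unofficial inference sketch. The `question: ... context: ...` prompt format is an assumption — the card does not document how SQuAD v2 examples were serialized during fine-tuning — and the repo id is taken from this row.
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

repo = "DiDiR6/T5-QA"  # repo id from this row
tokenizer = AutoTokenizer.from_pretrained(repo)
model = T5ForConditionalGeneration.from_pretrained(repo)

# Prompt format is an assumption, not documented in this card.
prompt = "question: Where is the Eiffel Tower? context: The Eiffel Tower is in Paris."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```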
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6544 | 1.0 | 8321 | 1.4161 |
| 1.47 | 2.0 | 16642 | 1.3316 |
| 1.4079 | 3.0 | 24963 | 1.2978 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google-t5/t5-small", "model-index": [{"name": "QA", "results": []}]} | DiDiR6/T5-QA | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"question-answering",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T15:56:53+00:00 |
null | null | {} | Nomad12/te | null | [
"region:us"
] | null | 2024-05-01T16:01:18+00:00 |
|
null | null | {} | Nomad12/test | null | [
"region:us"
] | null | 2024-05-01T16:02:35+00:00 |
|
null | null | {"license": "unknown"} | hautc/z2 | null | [
"license:unknown",
"region:us"
] | null | 2024-05-01T16:02:38+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | XsoraS/xgpt_chat | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T16:02:43+00:00 |
null | null | {"license": "unknown"} | hautc/w1 | null | [
"license:unknown",
"region:us"
] | null | 2024-05-01T16:02:51+00:00 |
|
text-classification | transformers | {} | Youssef1234/test | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T16:03:27+00:00 |
|
null | null | {} | squaadinc/1714579355803x587981696952696800 | null | [
"region:us"
] | null | 2024-05-01T16:04:11+00:00 |
|
null | null | {} | minhquy1624/model-education-v3 | null | [
"safetensors",
"region:us"
] | null | 2024-05-01T16:04:39+00:00 |
|
text-generation | transformers | # model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Locutusque/llama-3-neural-chat-v1-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b) as the base.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Locutusque/llama-3-neural-chat-v1-8b
dtype: bfloat16
merge_method: dare_ties
parameters:
int8_mask: 1.0
normalize: 0.0
slices:
- sources:
- layer_range: [0, 4]
model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 1.0
weight: 0.6
- layer_range: [0, 4]
model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 0.6
weight: 0.5
- layer_range: [0, 4]
model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
density: 1.0
weight: 0.5
- sources:
- layer_range: [4, 8]
model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 0.8
weight: 0.1
- layer_range: [4, 8]
model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 1.0
weight: 0.2
- layer_range: [4, 8]
model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
density: 1.0
weight: 0.7
- sources:
- layer_range: [8, 12]
model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 0.7
weight: 0.1
- layer_range: [8, 12]
model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 0.7
weight: 0.2
- layer_range: [8, 12]
model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
density: 0.7
weight: 0.6
- sources:
- layer_range: [12, 16]
model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 0.9
weight: 0.2
- layer_range: [12, 16]
model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 0.6
weight: 0.6
- layer_range: [12, 16]
model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
density: 0.7
weight: 0.3
- sources:
- layer_range: [16, 20]
model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 1.0
weight: 0.2
- layer_range: [16, 20]
model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 1.0
weight: 0.2
- layer_range: [16, 20]
model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
density: 0.9
weight: 0.4
- sources:
- layer_range: [20, 24]
model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 0.7
weight: 0.2
- layer_range: [20, 24]
model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 0.9
weight: 0.3
- layer_range: [20, 24]
model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
density: 1.0
weight: 0.4
- sources:
- layer_range: [24, 28]
model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 1.0
weight: 0.4
- layer_range: [24, 28]
model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 0.8
weight: 0.2
- layer_range: [24, 28]
model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
density: 0.9
weight: 0.4
- sources:
- layer_range: [28, 32]
model: cognitivecomputations/dolphin-2.9-llama3-8b
parameters:
density: 1.0
weight: 0.3
- layer_range: [28, 32]
model: Weyaxi/Einstein-v6.1-Llama3-8B
parameters:
density: 0.9
weight: 0.2
- layer_range: [28, 32]
model: Locutusque/llama-3-neural-chat-v1-8b
parameters:
density: 1.0
weight: 0.3
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_aloobun__CosmicBun-8B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |68.81|
|AI2 Reasoning Challenge (25-Shot)|61.86|
|HellaSwag (10-Shot) |84.29|
|MMLU (5-Shot) |65.53|
|TruthfulQA (0-shot) |54.08|
|Winogrande (5-shot) |78.85|
|GSM8k (5-shot) |68.23|
| {"license": "mit", "library_name": "transformers", "tags": ["mergekit", "merge", "math", "llama3", "physics", "chemistry", "biology", "dolphin"], "base_model": ["cognitivecomputations/dolphin-2.9-llama3-8b", "Weyaxi/Einstein-v6.1-Llama3-8B", "Locutusque/llama-3-neural-chat-v1-8b"], "model-index": [{"name": "CosmicBun-8B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 61.86, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 84.29, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 65.53, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 54.08}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 78.85, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 68.23, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=aloobun/CosmicBun-8B", "name": "Open LLM Leaderboard"}}]}]} | aloobun/CosmicBun-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"math",
"llama3",
"physics",
"chemistry",
"biology",
"dolphin",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:cognitivecomputations/dolphin-2.9-llama3-8b",
"base_model:Weyaxi/Einstein-v6.1-Llama3-8B",
"base_model:Locutusque/llama-3-neural-chat-v1-8b",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T16:06:34+00:00 |
automatic-speech-recognition | transformers | {} | JensCoet/whisper-small-nl | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T16:06:45+00:00 |
|
null | null | {} | squaadinc/1714579585145x227550015770853380 | null | [
"region:us"
] | null | 2024-05-01T16:08:01+00:00 |
|
null | null | {} | NKASG/img_cls | null | [
"region:us"
] | null | 2024-05-01T16:09:41+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | saransh03sharma/mintrec2-llama-2-7b-150 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T16:10:04+00:00 |
null | null | {} | danijelanadj/danijela | null | [
"region:us"
] | null | 2024-05-01T16:10:51+00:00 |
|
text-classification | transformers | {} | sstoia/CheckThat2024_stratified_sigmoidreweighting_roberta | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T16:11:03+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | Weblet/phi-2-turbo17145797628529394_mlabonne-guanaco-llama2-1k_train | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-01T16:11:24+00:00 |
null | null | {} | squaadinc/1714579798255x107217338714030080 | null | [
"region:us"
] | null | 2024-05-01T16:11:35+00:00 |
|
null | null |
# Ognoexperiment27multi_verse_modelShadowm7exp-7B
Ognoexperiment27multi_verse_modelShadowm7exp-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: automerger/Ognoexperiment27Multi_verse_model-7B
- model: mahiatlinux/ShadowM7EXP-7B
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Ognoexperiment27multi_verse_modelShadowm7exp-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/Ognoexperiment27multi_verse_modelShadowm7exp-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T16:12:28+00:00 |
null | null | {} | squaadinc/1714579798255x10721733871403008 | null | [
"region:us"
] | null | 2024-05-01T16:12:31+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** pmeyhoefer
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
Fine-tuned with the JobRouter dataset
| {"language": ["de"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | pmeyhoefer/jobrouterps | null | [
"transformers",
"safetensors",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"de",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T16:12:50+00:00 |
null | null | {} | squaadinc/cc | null | [
"region:us"
] | null | 2024-05-01T16:13:42+00:00 |
|
text-generation | null |
## Llamacpp imatrix Quantizations of Llama-3-8B-LexiFun-Uncensored-V1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
All quants made using imatrix option with dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf](https://huggingface.co/bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF/blob/main/Llama-3-8B-LexiFun-Uncensored-V1-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
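If you only want one of the files above rather than the whole branch, a minimal sketch using `huggingface_hub` (the chosen filename is just an example — pick whichever quant fits your hardware):

```python
from huggingface_hub import hf_hub_download

# Example only: downloads a single quant file instead of cloning the full repo.
gguf_path = hf_hub_download(
    repo_id="bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF",
    filename="Llama-3-8B-LexiFun-Uncensored-V1-Q4_K_M.gguf",
)
print(gguf_path)
```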
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also available on AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"language": ["en"], "license": "other", "tags": ["llama3", "comedy", "comedian", "fun", "funny", "llama38b", "laugh", "sarcasm", "roleplay"], "license_name": "llama3", "license_link": "https://llama.meta.com/llama3/license/", "quantized_by": "bartowski", "pipeline_tag": "text-generation"} | bartowski/Llama-3-8B-LexiFun-Uncensored-V1-GGUF | null | [
"gguf",
"llama3",
"comedy",
"comedian",
"fun",
"funny",
"llama38b",
"laugh",
"sarcasm",
"roleplay",
"text-generation",
"en",
"license:other",
"region:us"
] | null | 2024-05-01T16:13:51+00:00 |
null | null | {} | ASHWINKUMARBR/test | null | [
"region:us"
] | null | 2024-05-01T16:13:58+00:00 |
|
question-answering | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Dofla/roberta-base | null | [
"transformers",
"safetensors",
"roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-05-01T16:14:12+00:00 |
null | null | {"license": "mit"} | briareus2000/model | null | [
"license:mit",
"region:us"
] | null | 2024-05-01T16:14:18+00:00 |
|
null | null | {} | squaadinc/disney | null | [
"region:us"
] | null | 2024-05-01T16:14:35+00:00 |
|
null | null | {} | squaadinc/cc1 | null | [
"region:us"
] | null | 2024-05-01T16:15:12+00:00 |
|
null | null | {} | squaadinc/disney1 | null | [
"region:us"
] | null | 2024-05-01T16:15:16+00:00 |
|
null | transformers | {} | skim-wmt24/ct2-labse | null | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-05-01T16:16:02+00:00 |
|
null | null | {"license": "cc-by-nc-2.0"} | DJ491/AlexandraStan | null | [
"license:cc-by-nc-2.0",
"region:us"
] | null | 2024-05-01T16:17:31+00:00 |
|
null | null | {} | squaadinc/tt | null | [
"region:us"
] | null | 2024-05-01T16:18:12+00:00 |
|
null | null | {} | jdorairaj/Experiments_Adapters | null | [
"region:us"
] | null | 2024-05-01T16:18:35+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_LLama
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7219
## Model description
More information needed
## Intended uses & limitations
More information needed
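No usage snippet is provided in this card; as a hedged, minimal sketch of how such a PEFT adapter would typically be loaded on top of the base model listed above (assuming this repo contains only the adapter weights):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
# Adapter weights are assumed to live in this repo.
model = PeftModel.from_pretrained(base, "werent4/test_LLama")
```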
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7234 | 1.0 | 132 | 1.7219 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "test_LLama", "results": []}]} | werent4/test_LLama | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T16:19:03+00:00 |
null | null | {"license": "apache-2.0"} | zapod/RSNA_VGG16 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-01T16:19:34+00:00 |
|
null | null | {} | squaadinc/1714580309842x406888836145610750 | null | [
"region:us"
] | null | 2024-05-01T16:20:04+00:00 |
|
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Boreas-7B-chat - bnb 4bits
- Model creator: https://huggingface.co/yhavinga/
- Original model: https://huggingface.co/yhavinga/Boreas-7B-chat/
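No usage snippet is included for this quant; a minimal sketch for loading the pre-quantized weights (assuming the saved config already carries the bitsandbytes 4-bit settings, as is usual for these quant repos):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/yhavinga_-_Boreas-7B-chat-4bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization settings are assumed to be read from the repo's config.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```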
Original model description:
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: true
widget:
- messages:
- role: system
content: Je bent een behulpzame Nederlandse AI-assistent.
- role: user
content: Is Nederlandse wijn lekker?
datasets:
- yhavinga/mc4_nl_cleaned
- yhavinga/nedd_wiki_news
- teknium/OpenHermes-2.5
- euirim/goodwiki
- philschmid/flanv2
---
# Boreas
**NB: 20240430 model card is WIP - evaluations / example generations to be added**

Boreas-7B is a Dutch/English language model based on Mistral-7B.
It was trained on 10 billion tokens of Dutch and English text.
Boreas-7B-chat was further trained on instruction and chat data.
* Boreas-7B is comparable to [GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B) in the sense that
it is also a model trained further from Mistral-7B, on an equally large number of tokens (10B).
* Boreas-7B-chat is comparable to [GEITje-7B-chat](https://huggingface.co/Rijgersberg/GEITje-7B-chat) and [GEITJE-7B-ultra-sft](https://huggingface.co/BramVanroy/GEITje-7B-ultra-sft), in the sense that it is also a
fine-tune on a chat dataset.
Edwin Rijgersberg has written [extensive documentation](https://github.com/Rijgersberg/GEITje/blob/main/README.md) for using GEITje,
and it also applies to Boreas.
The main differences between Boreas and GEITje are:
* Boreas was trained with a context length of 2048 tokens, GEITje with 8192 tokens.
* Boreas was trained on a mix of English and Dutch, whereas GEITje was trained mainly on Dutch only.
## Usage with ollama
Choose a GGUF quant from [Boreas-7B-chat-v1-GGUF](https://huggingface.co/yhavinga/Boreas-7B-chat-v1-GGUF)
and follow the instructions there.
Important: use a system prompt, otherwise the results are mediocre.
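As a minimal, hypothetical sketch of such a chat call through the Ollama Python client (it assumes you have already created an Ollama model named `boreas` from one of those GGUF quants; the prompts are the ones from this card's widget config):

```python
import ollama  # pip install ollama

# Hypothetical: "boreas" must already exist as an Ollama model built from a GGUF quant.
response = ollama.chat(
    model="boreas",
    messages=[
        {"role": "system", "content": "Je bent een behulpzame Nederlandse AI-assistent."},
        {"role": "user", "content": "Is Nederlandse wijn lekker?"},
    ],
)
print(response["message"]["content"])
```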
## Goal of Boreas
Creation of a language model whose Dutch portion is not trained on texts generated by
an LLM: no Dutch chats generated by an LLM, and also no datasets translated from English by an LLM.
There was no strict filtering on LLM-generated text, but it was a point of attention when compiling the datasets.
In the fine-tuning of Boreas-chat, 3% of the tokens are nevertheless Dutch chats generated by an LLM.
This is a small dataset created from Dutch source texts, and a sample check has shown
that this data is of good quality.
The final chat model was trained on a mix consisting mainly of:
1. Openhermes-2.5: a large, diverse English chat dataset (~45%)
2. An English-to-Dutch translation dataset (~34%)
3. Pre-train data: more Wiki and books (~12%)
4. Dutch wiki and books Q&A (~3%)
The Boreas model can therefore be regarded as a test of knowledge transfer from English to Dutch.
## Boreas-7B base model
The base model was trained further from Mistral-7B on 10 billion tokens.
The dataset was composed from various sources in both Dutch and English:
| Dataset name | Number of tokens | Percentage of tokens (%) |
|------------------------------------------------|---------------|-----------------------|
| Dutch novels | 3401M | 34.01 |
| Dutch Wikipedia | 2381M | 23.81 |
| mc4_nl_cleaned (Dutch) | 1361M | 13.61 |
| Dutch news | 1361M | 13.61 |
| Dutch schoolbooks | 136M | 1.36 |
| English novels | 340M | 3.40 |
| English Wikipedia (euirim/goodwiki) | 340M | 3.40 |
| English math and physics books | 340M | 3.40 |
| English instruction dataset (philschmid/flanv2) | 340M | 3.40 |
The choice of this mix is based on data availability as well as the following considerations:
* A lot of high-quality Dutch: texts primarily written in Dutch by humans. This leads to the choice
of novels, Wikipedia and news articles, and to excluding, for example, forum/Twitter posts and legal texts.
* Mixing in ~5% of the original kind of dataset, to make sure the model does not lose its original knowledge.
It is not known which data Mistral was trained on, but it is plausible that it included high-quality English text as well as instruction data. That is why ~3% each of English books, Wikipedia and instruction data was chosen.
* Excluding LLM-generated texts from the pre-train phase as much as possible. In many datasets, especially Dutch ones, I notice that
the translations or generations are of poor quality. That is why datasets were chosen whose source data
predates the ChatGPT era (i.e. before November 2022).
* mc4_nl_cleaned - the source of this dataset is mC4 - deduplicated data from Common
Crawl, filtered on bad words and with other processing following the recipe of the T5 authors for the English C4 dataset. In various ablations C4 turns out to be a good pre-train dataset, which is why mc4_nl_cleaned was also used for this model.
* No source code was mixed in - I do not expect a 7B model to ever generate code that is usable.
It might help with logical reasoning puzzles, but even there I expect a 7B model will never be able to do
or generalize this as well as larger models.
During pre-training the source texts were packed into blocks of 2048 tokens. Where possible, only
texts that belong together were packed into the same block. Small fragments are also discarded, so that we
never get, for example, a fragment that starts with a few tokens from the end of one Wikipedia article and then continues with a different Wikipedia article. This was done to avoid 'cross sequence' noise within a single example as much as possible. Only after packing were the examples shuffled.
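A rough, illustrative sketch of this packing step (the actual pipeline is not published here; the minimum-tail threshold is an assumption):

```python
import random

BLOCK_SIZE = 2048
MIN_TAIL = 64  # assumed threshold; the card does not state the exact cut-off

def pack_examples(tokenized_docs):
    """Greedily pack token lists into fixed-size blocks, keeping related documents
    adjacent; a tiny leftover tail of a document is dropped rather than used to
    start the next block."""
    blocks, current = [], []
    for doc in tokenized_docs:
        pos = 0
        while pos < len(doc):
            space = BLOCK_SIZE - len(current)
            chunk = doc[pos:pos + space]
            current.extend(chunk)
            pos += len(chunk)
            if len(current) == BLOCK_SIZE:
                blocks.append(current)
                current = []
                if 0 < len(doc) - pos < MIN_TAIL:
                    break  # discard the small tail instead of carrying it over
    random.shuffle(blocks)  # shuffle only after packing
    return blocks  # the final partially-filled block is dropped as well
```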
## Pre-training
* Boreas was pre-trained with the [EasyDeL JAX framework](https://github.com/erfanzar/EasyDel) on a tpu-v4-32
kindly supplied by the Google [TPU Research Cloud](https://sites.research.google/trc/about/).
* Batch size 96, gradient accumulation steps 2
* Using flash attention, block size of 512
* Max sequence length of 2048
* LION optimizer, triangle learning rate schedule with max lr 3e-6, gradient clipping to 1.0
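As an illustration of the triangle learning-rate schedule mentioned above (a hypothetical helper, not the EasyDeL implementation; the warmup fraction is an assumption — the card only states the triangular shape and max lr 3e-6):

```python
def triangle_lr(step, total_steps, max_lr=3e-6, warmup_frac=0.5):
    # Linear ramp up to max_lr, then linear decay back to zero.
    peak_step = max(int(total_steps * warmup_frac), 1)
    if step <= peak_step:
        return max_lr * step / peak_step
    return max_lr * max(total_steps - step, 0) / max(total_steps - peak_step, 1)
```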



<!-- [https://wandb.ai/yepster/EasyDeL-MistralBoreas/runs/ozw55qaq/workspace?nw=nwuseryepster](WandB Boreas 7B pre-train) -->
## Boreas-7B-chat
The chat LLM model was, like the base model, trained on a mix of datasets, with a size of 4.7B tokens.
It is a full finetune, so not a LoRA finetune.
The following datasets were mixed:
| Dataset name | Weight | Percentage of tokens (%) |
|-----------------------------------------------------------|---------|-----------------------|
| (C) Diverse English chat dataset (teknium/OpenHermes-2.5) | 200 | 45.15 |
| (C) Translation en->nl paragraphs (novels) | 100 | 22.57 |
| (C) Translation en->nl sentences (novels) | 50 | 11.29 |
| (P) Dutch wikipedia | 30 | 6.77 |
| (P) English math and physics books | 25 | 5.64 |
| (C) English instruct dataset (philschmid/flanv2) | 20 | 4.51 |
| (C) Dutch wiki Q&A | 12 | 2.71 |
| (C) Dutch schoolbooks Q&A | 3 | 0.68 |
| (P) Dutch schoolbooks | 2 | 0.45 |
| (C) Translation en->nl expressions (dictionary) | 1 | 0.23 |
(C) indicates that the text is formatted for chat, (P) is unformatted text (identical to the pre-train phase)
The largest part consists of `teknium/OpenHermes-2.5` - which itself is again an amalgam of various
filtered chat/instruct datasets. This dataset does contain program-code data, with the result that Boreas-7B-chat
is able to answer simple programming questions.
The reason to mix so much English into the dataset is mainly to get the diversity of the dataset as high as
possible, and because I expect that a fair amount of cross-language and to-Dutch knowledge transfer is possible.
The reverse is certainly true: if a fine-tune dataset is not diverse, the model will, through its fine-tuning, no longer be
able to perform its original skills. One of the first Mistral finetunes I made was fine-tuned
on en->nl translation only. In the end that model could no longer do anything other than translate into Dutch.
In contrast to the base model, the chat model _was_ trained on LLM-generated texts - with the following
considerations: for the Dutch generated chats I again tried to guide towards as much original
Dutch language use as possible by only generating questions and answers based on texts that were originally written in
Dutch by a person. These are the Dutch wiki Q&A and Dutch schoolbooks Q&A
chat datasets. This ensures as much as possible that, for example in education-style Q&A, the terms and units
common in our region occur in the chat database, at least for the Dutch-language chats.
For all chat datasets, training was done only on the assistant-completion tokens.
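A hedged illustration of what training only on assistant-completion tokens typically looks like (not the actual training code; the span indices are assumed to be derived from the chat template):

```python
IGNORE_INDEX = -100  # ignored by PyTorch's cross-entropy loss

def mask_labels(input_ids, assistant_spans):
    # Hypothetical helper: only assistant-completion tokens keep their label,
    # everything else is masked out so the loss ignores it.
    labels = [IGNORE_INDEX] * len(input_ids)
    for start, end in assistant_spans:  # (start, end) token indices of assistant turns
        labels[start:end] = input_ids[start:end]
    return labels
```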
## Fine-tuning
* Boreas was fine-tuned with the [EasyDeL JAX framework](https://github.com/erfanzar/EasyDel) on a tpu-v4-32
kindly supplied by the Google [TPU Research Cloud](https://sites.research.google/trc/about/).
* Batch size 96, gradient accumulation 2,
* Using flash attention, block size of 512
* Max sequence length of 2048
* LION optimizer, triangle learning rate schedule with max lr 2e-6, gradient clipping to 1.0 (NB: the schedule was not finished due to an error at the end of the dataset epoch. Since the loss had plateaued I decided then to not resume for another epoch)



<!-- [https://wandb.ai/yepster/EasyDeL-MistralBoreas/runs/ynkl2jtx?nw=nwuseryepster](WandB Boreas 7B chat finetune) -->
| {} | RichardErkhov/yhavinga_-_Boreas-7B-chat-4bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-01T16:20:32+00:00 |
text-to-image | diffusers | <h1><b>EN</b></h1>
<center><h1>Model trained from Antonio Caramia art</h1></center>
<p>Use the keyword "Antonio Caramia style"</p>
<p>Trained with Fast_DreamBooth on Google Colab at <b>https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb</b> with over 80 images</p>
<p>Art from: <b>https://www.antoniocaramia.it/</b></p>
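No usage snippet is included in the card; a hedged sketch with diffusers (assuming the repo ships LoRA weights compatible with `load_lora_weights`, as the sd-lora tags suggest):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rob80fame/caramia_style")  # assumed to contain sd-lora weights
# Use the trigger keyword from the card in the prompt.
image = pipe("a Mediterranean landscape, Antonio Caramia style").images[0]
image.save("caramia_style_sample.png")
```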
<br></br>
<h1><b>IT</b></h1>
<center><h2><i>Modello addestrato dalle opere di Antonio Caramia</i></h2></center>
<p><i>Usa la parola chiave "Antonio Caramia style"</i></p>
<p><i>Creato con Fast_DreamBooth su Google collab al link <b>https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb</i></b></p>
<p><i>Arte presa da: <b>https://www.antoniocaramia.it/</b></i></p> | {"language": ["en", "it"], "license": "unknown", "tags": ["art", "text-to-image", "stable-diffusion", "lora", "diffusers", "template:sd-lora"], "pipeline_tag": "text-to-image", "base_model": "runwayml/stable-diffusion-v1-5"} | rob80fame/caramia_style | null | [
"diffusers",
"art",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"en",
"it",
"base_model:runwayml/stable-diffusion-v1-5",
"license:unknown",
"region:us"
] | null | 2024-05-01T16:20:59+00:00 |