| Column | Type |
|--|--|
| pipeline_tag | stringclasses (48 values) |
| library_name | stringclasses (205 values) |
| text | stringlengths (0 to 18.3M) |
| metadata | stringlengths (2 to 1.07B) |
| id | stringlengths (5 to 122) |
| last_modified | null |
| tags | sequencelengths (1 to 1.84k) |
| sha | null |
| created_at | stringlengths (25 to 25) |
null | null |
# Model Name
Description of the model, its uses, and any important information.
## Model Details
- Training data
- Training procedure
- Intended use and limitations
- Citation details
| {} | Kurkur99/mistral99 | null | [
"region:us"
] | null | 2024-05-02T12:43:02+00:00 |
null | transformers | {} | Rasi1610/Deathce502_series3_m7 | null | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T12:43:32+00:00 |
|
text-generation | transformers |
# Uploaded model
- **Developed by:** waylandzhang
- **License:** apache-2.0
- **Finetuned from model:** Llama-3-8b-Chinese-Roleplay-4bit-lesson-v0.3
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "Llama-3-8b-Chinese-Roleplay-4bit-lesson-v0.3"} | waylandzhang/Llama-3-8b-Chinese-Roleplay-4bit-lesson-v0.3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:Llama-3-8b-Chinese-Roleplay-4bit-lesson-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"region:us"
] | null | 2024-05-02T12:44:18+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | MSey/tiny_BROLLLT_0001.1p | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T12:45:11+00:00 |
null | null | {} | Bitoyyy/Bitoyyyyyyyy | null | [
"region:us"
] | null | 2024-05-02T12:46:06+00:00 |
|
null | null | {"license": "llama3"} | MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.4-GGUF | null | [
"license:llama3",
"region:us"
] | null | 2024-05-02T12:46:43+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | SotirisLegkas/value_multi | null | [
"transformers",
"safetensors",
"roberta",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T12:47:43+00:00 |
null | null | {"license": "mit"} | thevidoja/pddl-embeddings | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T12:48:10+00:00 |
|
null | null | {} | haytamelouarrat/dqn-SpaceInvadersNoFrameskip-v4 | null | [
"region:us"
] | null | 2024-05-02T12:48:12+00:00 |
|
null | peft |
# Model Card for Model ID
Fine-tuned from Llama 2 7B to test QLoRA in Ludwig.
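For reference, a minimal loading sketch (hypothetical usage, not from the original card; it assumes this repo contains a LoRA adapter for the gated `meta-llama/Llama-2-7b-hf` base):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch: load the base model, then attach this LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "advaitkale/mdguc1-ludwig")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```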
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-hf"} | advaitkale/mdguc1-ludwig | null | [
"peft",
"safetensors",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-05-02T12:48:53+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deepseek-coder-33b-instruct - bnb 4bits
- Model creator: https://huggingface.co/deepseek-ai/
- Original model: https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct/
Original model description:
---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-33b-instruct is a 33B parameter model initialized from deepseek-coder-33b-base and fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here are some examples of how to use our model.
#### Chat Model Inference
```python
import torch  # needed for torch.bfloat16 below
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
| {} | RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T12:48:57+00:00 |
null | null | {} | hussamsal/toke_5 | null | [
"region:us"
] | null | 2024-05-02T12:49:09+00:00 |
|
null | null | {} | mewsaa/SehatRasta | null | [
"region:us"
] | null | 2024-05-02T12:49:28+00:00 |
|
null | null | {} | AkshayPM/any_unit_to_gram | null | [
"region:us"
] | null | 2024-05-02T12:49:45+00:00 |
|
feature-extraction | transformers | # CamemBERTa-L10
This model is a pruned version of the pre-trained [CamemBERTa](https://huggingface.co/almanach/camemberta-base) checkpoint, obtained by [dropping the top-layers](https://doi.org/10.48550/arXiv.2004.03844) from the original model.
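A minimal sketch of the pruning technique with 🤗 Transformers (an illustration of the approach, not the exact script used to produce this checkpoint):
```python
from transformers import AutoModel

# Load the full 12-layer base checkpoint, then keep only the bottom 10
# encoder layers (nn.ModuleList supports slicing).
model = AutoModel.from_pretrained("almanach/camemberta-base")
model.encoder.layer = model.encoder.layer[:10]  # drop the top 2 layers
model.config.num_hidden_layers = 10
```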
## Usage
You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions such as text classification, extractive question answering, or semantic search. For tasks such as text generation, you should look at autoregressive models like [BelGPT-2](https://huggingface.co/antoinelouis/belgpt2).
You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='antoinelouis/camemberta-L10')
unmasker("Bonjour, je suis un [MASK] modèle.")
```
You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/camemberta-L10')
model = AutoModel.from_pretrained('antoinelouis/camemberta-L10')
text = "Remplacez-moi par le texte de votre choix."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
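Since the checkpoint is primarily meant to be fine-tuned (see above), here is a minimal sketch that attaches a classification head; the two-label setup is a placeholder:
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    'antoinelouis/camemberta-L10', num_labels=2
)
# ...then fine-tune with the 🤗 Trainer or a custom loop on a labelled French dataset.
```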
## Variations
CamemBERTa was originally released in a base (112M) version. The following checkpoints prune the base variation by dropping the top 2, 4, 6, 8, and 10 pretrained encoding layers, respectively.
| Model | #Params | Size | Pruning |
|----------------------------------------------------------------------|:-------:|:-----:|:-------:|
| [CamemBERTa-base](https://huggingface.co/almanach/camemberta-base) | 111.8M | 447MB | - |
| | | | |
| [CamemBERTa-L10](https://huggingface.co/antoinelouis/camemberta-L10) | 97.6M | 386MB | -14% |
| [CamemBERTa-L8](https://huggingface.co/antoinelouis/camemberta-L8) | 83.5M | 334MB | -25% |
| [CamemBERTa-L6](https://huggingface.co/antoinelouis/camemberta-L6) | 69.3M | 277MB | -38% |
| [CamemBERTa-L4](https://huggingface.co/antoinelouis/camemberta-L4) | 55.1M | 220MB | -51% |
| [CamemBERTa-L2](https://huggingface.co/antoinelouis/camemberta-L2) | 40.9M | 164MB | -63% | | {"language": ["fr"], "license": "mit", "library_name": "transformers", "inference": false, "pipeline_tag": "feature-extraction"} | antoinelouis/camemberta-L10 | null | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"fr",
"license:mit",
"region:us"
] | null | 2024-05-02T12:50:44+00:00 |
text-generation | transformers |
This is a copy of 'Qwen/Qwen-VL-Chat-Int4', modified so that image tensors can be passed in as a parameter instead of being downloaded from the internet: `model.forward(**inputs)` becomes `model.forward(**inputs, image_list)`. | {"license": "apache-2.0"} | giobin/Qwen-VL-Chat-Int4-fromImageList | null | [
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"4-bit",
"region:us"
] | null | 2024-05-02T12:51:22+00:00 |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation on CIFAR10.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('efekankavalci/ddpm-cifar10-unconditional')
image = pipeline().images[0]
image
```
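As a hypothetical follow-up (not part of the original card), you can sample a batch and save the images:
```python
# Sample four images in one call and write them to disk.
images = pipeline(batch_size=4).images
for i, img in enumerate(images):
    img.save(f"cifar10_sample_{i}.png")
```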
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | efekankavalci/ddpm-cifar10-unconditional | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T12:52:26+00:00 |
text-generation | null |
## Llamacpp imatrix Quantizations of Awanllm-Llama-3-8B-Instruct-ORPO-v0.1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.
Original model: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1
All quants were made using the imatrix option with a dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
<|eot_id|>
```
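For instance, with the llama-cpp-python bindings (an illustrative sketch; the package choice, file name, and sampling settings are assumptions, not part of this card):
```python
from llama_cpp import Llama

llm = Llama(model_path="Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q4_K_M.gguf")
prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "Write a haiku about quantization.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```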
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q8_0.gguf) | Q8_0 | 8.54GB | Extremely high quality, generally unneeded but max available quant. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q6_K.gguf) | Q6_K | 6.59GB | Very high quality, near perfect, *recommended*. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.73GB | High quality, *recommended*. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q5_K_S.gguf) | Q5_K_S | 5.59GB | High quality, *recommended*. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.92GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.69GB | Slightly lower quality with more space savings, *recommended*. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ4_NL.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ4_NL.gguf) | IQ4_NL | 4.67GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ4_XS.gguf) | IQ4_XS | 4.44GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q3_K_L.gguf) | Q3_K_L | 4.32GB | Lower quality but usable, good for low RAM availability. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q3_K_M.gguf) | Q3_K_M | 4.01GB | Even lower quality. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ3_M.gguf) | IQ3_M | 3.78GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ3_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ3_S.gguf) | IQ3_S | 3.68GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.66GB | Low quality, not recommended. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ3_XS.gguf) | IQ3_XS | 3.51GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ3_XXS.gguf) | IQ3_XXS | 3.27GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q2_K.gguf) | Q2_K | 3.17GB | Very low quality but surprisingly usable. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ2_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ2_M.gguf) | IQ2_M | 2.94GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ2_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ2_S.gguf) | IQ2_S | 2.75GB | Very low quality, uses SOTA techniques to be usable. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ2_XS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ2_XS.gguf) | IQ2_XS | 2.60GB | Very low quality, uses SOTA techniques to be usable. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ2_XXS.gguf) | IQ2_XXS | 2.39GB | Lower quality, uses SOTA techniques to be usable. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ1_M.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ1_M.gguf) | IQ1_M | 2.16GB | Extremely low quality, *not* recommended. |
| [Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ1_S.gguf](https://huggingface.co/bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF/blob/main/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-IQ1_S.gguf) | IQ1_S | 2.01GB | Extremely low quality, *not* recommended. |
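One way to fetch a single file rather than the whole branch is via `huggingface_hub` (a sketch; substitute whichever filename from the table fits your hardware):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF",
    filename="Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```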
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
| {"license": "llama3", "quantized_by": "bartowski", "pipeline_tag": "text-generation"} | bartowski/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1-GGUF | null | [
"gguf",
"text-generation",
"license:llama3",
"region:us"
] | null | 2024-05-02T12:52:41+00:00 |
null | null | {"license": "openrail"} | SimplCup/MsBigsausage | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T12:53:45+00:00 |
|
text-generation | transformers |
# Uploaded model
- **Developed by:** walid-iguider
- **License:** cc-by-nc-sa-4.0
- **Finetuned from model:** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) | {"language": ["it"], "license": "cc-by-nc-sa-4.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "datasets": ["mchl-labs/stambecco_data_it"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"} | walid-iguider/Phi-3-mini-4k-instruct-Ita-600 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"it",
"dataset:mchl-labs/stambecco_data_it",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T12:54:37+00:00 |
text-generation | transformers | This is an initial finetune of Llama 3 on the conceptnet "UsedFor" relationships (4000 relationships) | {"language": ["en"], "license": "llama3", "library_name": "transformers", "tags": ["merge", "llama", "unsloth", "trl", "sft"], "datasets": ["vloverar/conceptnet_UsedFor_en_en_mixtral_finetune"]} | EvilScript/Meta-Llama-3-8B-Instruct-conceptnet_UsedFor_en_en | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:vloverar/conceptnet_UsedFor_en_en_mixtral_finetune",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T12:56:20+00:00 |
feature-extraction | transformers | # fine-tuned/jina-embeddings-v2-base-en-02052024-jkqyd3174i-webapp_3375412925
## Model Description
fine-tuned/jina-embeddings-v2-base-en-02052024-jkqyd3174i-webapp_3375412925 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-02052024-jkqyd3174i-webapp_3375412925).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "fine-tuned/jina-embeddings-v2-base-en-02052024-jkqyd3174i-webapp_3375412925"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
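The forward pass above returns per-token hidden states; one common way to collapse them into a single sentence embedding is mean pooling (a sketch; the upstream jina code loaded via `trust_remote_code` may also expose its own `encode()` helper):
```python
import torch

with torch.no_grad():
    out = model(**tokens)
# Average the token embeddings into one fixed-size sentence vector
# (ignoring padding, which is fine for a single unpadded input).
sentence_embedding = out.last_hidden_state.mean(dim=1)
```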
| {} | fine-tuned/jina-embeddings-v2-base-en-02052024-jkqyd3174i-webapp_3375412925 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"custom_code",
"region:us"
] | null | 2024-05-02T12:56:39+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | hussamsal/main | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T12:56:43+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | MSey/test | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T12:58:01+00:00 |
null | null | {"license": "openrail"} | odyssey-ai/JuggernautXL_Lightning | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T12:58:16+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | safinal/Llama-3-Persian-8B-LoRA | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T12:59:10+00:00 |
text-classification | transformers |
## AfriSenti Hausa Sentiment Regressor Description
Takes a text and predicts a sentiment value between -1 (negative) and 1 (positive), with 0 being neutral.
Regression Value Description:
| Value | Sentiment |
|--|--|
| -1 | Negative |
| 0 | Neutral |
| 1 | Positive |
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import math

import pandas as pd
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BATCH_SIZE = 32
BASE_MODEL = 'HausaNLP/afrisenti-hau-regression'

ds = pd.read_csv('test.csv')

device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForSequenceClassification.from_pretrained(BASE_MODEL).to(device)
model.eval()

nb_batches = math.ceil(len(ds) / BATCH_SIZE)
y_preds = []
with torch.no_grad():
    for i in range(nb_batches):
        # the tokenizer expects a list of strings, not a pandas Series
        input_texts = ds["tweet"][i * BATCH_SIZE: (i + 1) * BATCH_SIZE].tolist()
        encoded = tokenizer(input_texts, truncation=True, padding="max_length",
                            max_length=256, return_tensors="pt").to(device)
        # single regression head: one logit per example
        y_preds += model(**encoded).logits.reshape(-1).tolist()

df = pd.DataFrame({"Text": ds["tweet"], "Label": ds["label"], "Prediction": y_preds})
df.to_csv('predictions.csv', index=False)
``` | {"library_name": "transformers", "tags": []} | HausaNLP/afrisenti-hau-regression | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T12:59:15+00:00 |
null | null | {} | squaadinc/1714654815099x241026231967416320 | null | [
"region:us"
] | null | 2024-05-02T13:00:18+00:00 |
|
reinforcement-learning | stable-baselines3 |
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga emiliomartin84 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga emiliomartin84 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga emiliomartin84
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 10000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
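A programmatic alternative to the RL Zoo CLI above is the `huggingface_sb3` helper (a sketch; the checkpoint filename is assumed to follow the standard zoo naming):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="emiliomartin84/SpaceInvaders",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```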
| {"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "287.50 +/- 93.33", "name": "mean_reward", "verified": false}]}]}]} | emiliomartin84/SpaceInvaders | null | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-02T13:01:40+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SFTCodeBertbase-mlm-APPS5k
This model is a fine-tuned version of [microsoft/codebert-base-mlm](https://huggingface.co/microsoft/codebert-base-mlm) on the apps dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
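Expressed as 🤗 `TrainingArguments`, the configuration above looks roughly like this (a sketch; the SFT data pipeline itself is omitted):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="SFTCodeBertbase-mlm-APPS5k",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,  # 4 x 4 = effective batch size 16
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=5000,
)
```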
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2308 | 1.61 | 500 | 2.3196 |
| 2.1364 | 3.22 | 1000 | 2.1168 |
| 1.9211 | 4.83 | 1500 | 2.0125 |
| 1.776 | 6.44 | 2000 | 1.9736 |
| 1.6872 | 8.05 | 2500 | 1.9470 |
| 1.6137 | 9.65 | 3000 | 1.9344 |
| 1.5724 | 11.26 | 3500 | 1.9288 |
| 1.533 | 12.87 | 4000 | 1.9267 |
| 1.5211 | 14.48 | 4500 | 1.9246 |
| 1.5115 | 16.09 | 5000 | 1.9251 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["apps"], "base_model": "microsoft/codebert-base-mlm", "model-index": [{"name": "SFTCodeBertbase-mlm-APPS5k", "results": []}]} | AdnanRiaz107/SFTCodeBertbase-mlm-APPS5k | null | [
"transformers",
"safetensors",
"roberta",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"dataset:apps",
"base_model:microsoft/codebert-base-mlm",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:01:56+00:00 |
null | null | {} | Salii/sokoban | null | [
"region:us"
] | null | 2024-05-02T13:03:02+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | OwOpeepeepoopoo/herewegoagain4 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:04:05+00:00 |
text-generation | transformers | # Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure the library is installed.
```bash
pip install transformers==4.40.1
```
Also make sure you are providing your Hugging Face token to the pipeline if the model lives in a private repo.
- Either leave `token=True` in the `pipeline` and log in to `huggingface_hub` by running
```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```
- Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`
```python
from transformers import pipeline
generate_text = pipeline(
model="mwalol/stalwart-catfish-classifier-full",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
token=True,
)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 1
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|im_end|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"mwalol/stalwart-catfish-classifier-full",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"mwalol/stalwart-catfish-classifier-full",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 2
# generate_text.model.generation_config.max_new_tokens = 1
# generate_text.model.generation_config.do_sample = False
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.0)
# generate_text.model.generation_config.repetition_penalty = float(1.0)
res = generate_text(
"Why is drinking water so healthy?",
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself, handling the preprocessing steps on your own:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "mwalol/stalwart-catfish-classifier-full" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|im_end|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
# model.generation_config.min_new_tokens = 2
# model.generation_config.max_new_tokens = 1
# model.generation_config.do_sample = False
# model.generation_config.num_beams = 1
# model.generation_config.temperature = float(0.0)
# model.generation_config.repetition_penalty = float(1.0)
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model with quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Sharding across multiple GPUs is also possible by setting ```device_map="auto"```.
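A minimal sketch of the 4-bit option, assuming `bitsandbytes` and `accelerate` are installed:
```python
# Illustrative only: quantized load, sharded across all visible GPUs.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mwalol/stalwart-catfish-classifier-full",
    load_in_4bit=True,  # or load_in_8bit=True
    device_map="auto",
    trust_remote_code=True,
)
```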
## Model Architecture
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(128288, 4096, padding_idx=128001)
(layers): ModuleList(
(0-31): 32 x LlamaDecoderLayer(
(self_attn): LlamaFlashAttention2(
(q_proj): Linear(in_features=4096, out_features=4096, bias=False)
(k_proj): Linear(in_features=4096, out_features=1024, bias=False)
(v_proj): Linear(in_features=4096, out_features=1024, bias=False)
(o_proj): Linear(in_features=4096, out_features=4096, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear(in_features=4096, out_features=14336, bias=False)
(up_proj): Linear(in_features=4096, out_features=14336, bias=False)
(down_proj): Linear(in_features=14336, out_features=4096, bias=False)
(act_fn): SiLU()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=4096, out_features=128288, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it. | {"language": ["en"], "library_name": "transformers", "tags": ["gpt", "llm", "large language model", "h2o-llmstudio"], "inference": false, "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico"} | mwalol/stalwart-catfish-classifier-full | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"conversational",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:04:41+00:00 |
question-answering | transformers | {} | mondol007/albert-base-v2-finetuned-squad | null | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:04:47+00:00 |
|
null | transformers | {} | toth235a/mask2former-swin-large-crack-semantic | null | [
"transformers",
"safetensors",
"mask2former",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-05-02T13:04:52+00:00 |
|
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo NousResearch/Hermes-2-Pro-Llama-3-8B. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate "bitsandbytes>0.37.0"  # quotes keep the shell from treating ">" as a redirect
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-bnb-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, NousResearch/Hermes-2-Pro-Llama-3-8B, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B"} | PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-bnb-4bit-smashed | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T13:05:06+00:00 |
null | null | {} | samzirbo/mT5.baseline.bf16 | null | [
"region:us"
] | null | 2024-05-02T13:06:40+00:00 |
|
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-ner-demo
This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1566
- Precision: 0.6857
- Recall: 0.7725
- F1: 0.7265
- Accuracy: 0.9453
## Model description
More information needed
## Intended uses & limitations
More information needed
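Until this card is filled in, here is a minimal inference sketch; the example sentence and aggregation strategy are illustrative assumptions.
```python
# Sketch: token classification with this checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="munkhdelger1/roberta-base-ner-demo",
    aggregation_strategy="simple",  # assumption: merge sub-word tokens into entities
)
print(ner("Монгол Улсын нийслэл Улаанбаатар хот."))
```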
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.9745 | 1.0 | 477 | 0.5080 | 0.2164 | 0.1205 | 0.1548 | 0.8187 |
| 0.425 | 2.0 | 954 | 0.3128 | 0.5213 | 0.5929 | 0.5548 | 0.9038 |
| 0.2943 | 3.0 | 1431 | 0.2337 | 0.5905 | 0.6781 | 0.6313 | 0.9237 |
| 0.2393 | 4.0 | 1908 | 0.2000 | 0.6303 | 0.7224 | 0.6732 | 0.9333 |
| 0.2134 | 5.0 | 2385 | 0.1813 | 0.6526 | 0.7434 | 0.6951 | 0.9384 |
| 0.1978 | 6.0 | 2862 | 0.1704 | 0.6629 | 0.7527 | 0.7050 | 0.9412 |
| 0.1885 | 7.0 | 3339 | 0.1647 | 0.6737 | 0.7625 | 0.7154 | 0.9429 |
| 0.1823 | 8.0 | 3816 | 0.1595 | 0.6816 | 0.7680 | 0.7222 | 0.9443 |
| 0.1792 | 9.0 | 4293 | 0.1576 | 0.6843 | 0.7713 | 0.7252 | 0.9451 |
| 0.1778 | 10.0 | 4770 | 0.1566 | 0.6857 | 0.7725 | 0.7265 | 0.9453 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["mn"], "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bayartsogt/mongolian-roberta-base", "model-index": [{"name": "roberta-base-ner-demo", "results": []}]} | munkhdelger1/roberta-base-ner-demo | null | [
"tensorboard",
"safetensors",
"generated_from_trainer",
"mn",
"base_model:bayartsogt/mongolian-roberta-base",
"region:us"
] | null | 2024-05-02T13:07:01+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | theGhoul21/OrpoMistral-8B-SRL | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:07:41+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetune
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6845
## Model description
More information needed
## Intended uses & limitations
More information needed
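Until then, a minimal loading sketch for this LoRA adapter on top of its GPTQ base; it assumes `peft` plus a GPTQ-capable `transformers` stack (e.g. `auto-gptq`/`optimum`) are installed.
```python
# Sketch only: attach the adapter in this repo to the quantized base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", device_map="auto"
)
model = PeftModel.from_pretrained(base, "yo25/mistral-finetune")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")
```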
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2887 | 0.9231 | 3 | 1.9163 |
| 2.21 | 1.8462 | 6 | 1.8534 |
| 2.1457 | 2.7692 | 9 | 1.8140 |
| 1.5818 | 4.0 | 13 | 1.7767 |
| 2.0802 | 4.9231 | 16 | 1.7466 |
| 2.0341 | 5.8462 | 19 | 1.7224 |
| 2.0253 | 6.7692 | 22 | 1.7043 |
| 1.4828 | 8.0 | 26 | 1.6902 |
| 1.9755 | 8.9231 | 29 | 1.6851 |
| 1.3922 | 9.2308 | 30 | 1.6845 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.1.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "mistral-finetune", "results": []}]} | yo25/mistral-finetune | null | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T13:08:10+00:00 |
feature-extraction | transformers | {} | thomovich/sentence_transforthp | null | [
"transformers",
"onnx",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:08:51+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-sft-lora-ha
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3798
## Model description
More information needed
## Intended uses & limitations
More information needed
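Until then, a hedged sketch of merging this adapter into its base model for standalone use; repo ids are taken from this card.
```python
# Sketch: load base + adapter, then fold the LoRA weights into the base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", device_map="auto"
)
model = PeftModel.from_pretrained(base, "HachiML/mistral-7b-sft-lora-ha-v0.2_cleaned")
merged = model.merge_and_unload()  # merged model no longer needs peft at inference time
```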
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2779 | 1.0 | 15 | 1.9971 |
| 1.8274 | 2.0 | 30 | 1.6368 |
| 1.5549 | 3.0 | 45 | 1.4508 |
| 1.4289 | 4.0 | 60 | 1.3850 |
| 1.3983 | 5.0 | 75 | 1.3798 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator", "HachiML/Hachi-Alpaca-Mixtral-8x22B-Instruct-v0.1"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral-7b-sft-lora-ha", "results": []}]} | HachiML/mistral-7b-sft-lora-ha-v0.2_cleaned | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"dataset:HachiML/Hachi-Alpaca-Mixtral-8x22B-Instruct-v0.1",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T13:09:31+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | akshay-nambiar/Mistral-7B-Instruct-v0.8-custom-with-steps | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:10:27+00:00 |
text-classification | transformers | {} | jobanpreet123/sentiment-spanish | null | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:10:47+00:00 |
|
null | null | {} | oliverchau/burqa_sdxl | null | [
"region:us"
] | null | 2024-05-02T13:10:51+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "outputs", "results": []}]} | pilsneyrouset/outputs | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T13:10:55+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1348
- F1: 0.8641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2557 | 1.0 | 525 | 0.1547 | 0.8199 |
| 0.1275 | 2.0 | 1050 | 0.1337 | 0.8525 |
| 0.0793 | 3.0 | 1575 | 0.1348 | 0.8641 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]} | alexisxiaoyu/xlm-roberta-base-finetuned-panx-de | null | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:11:12+00:00 |
text-generation | transformers |
A self-trained GPT-2 Large with around 770M parameters.
The tokenizer is the one from https://huggingface.co/openai-community/gpt2.
The model is being trained on around 400B tokens; this checkpoint is from step 51k.
Evaluation is currently underway.
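A minimal generation sketch (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="DrNicefellow/GPT-2-Large-51k-steps")
print(generator("The meaning of life is", max_new_tokens=40)[0]["generated_text"])
```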
## License
This model is available under both the Apache 2.0 License and the MIT License, so the terms of both should be followed.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a cup of $2 coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note on which one you want me to drink.
| {"license": "apache-2.0"} | DrNicefellow/GPT-2-Large-51k-steps | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:11:22+00:00 |
null | null | {"license": "mit"} | janeano/empezando | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T13:11:37+00:00 |
|
feature-extraction | transformers | # CamemBERTa-L8
This model is a pruned version of the pre-trained [CamemBERTa](https://huggingface.co/almanach/camemberta-base) checkpoint, obtained by [dropping the top layers](https://doi.org/10.48550/arXiv.2004.03844) from the original model.
## Usage
You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions such as text classification, extractive question answering, or semantic search. For tasks such as text generation, you should look at autoregressive models like [BelGPT-2](https://huggingface.co/antoinelouis/belgpt2).
You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='antoinelouis/camemberta-L8')
unmasker("Bonjour, je suis un [MASK] modèle.")
```
You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/camemberta-L8')
model = AutoModel.from_pretrained('antoinelouis/camemberta-L8')
text = "Remplacez-moi par le texte de votre choix."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Variations
CamemBERTa was originally released in a base (112M) version. The following checkpoints prune the base variation by dropping the top 2, 4, 6, 8, and 10 pretrained encoding layers, respectively.
| Model | #Params | Size | Pruning |
|----------------------------------------------------------------------|:-------:|:-----:|:-------:|
| [CamemBERTa-base](https://huggingface.co/almanach/camemberta-base) | 111.8M | 447MB | - |
| | | | |
| [CamemBERTa-L10](https://huggingface.co/antoinelouis/camemberta-L10) | 97.6M | 386MB | -14% |
| [CamemBERTa-L8](https://huggingface.co/antoinelouis/camemberta-L8) | 83.5M | 334MB | -25% |
| [CamemBERTa-L6](https://huggingface.co/antoinelouis/camemberta-L6) | 69.3M | 277MB | -38% |
| [CamemBERTa-L4](https://huggingface.co/antoinelouis/camemberta-L4) | 55.1M | 220MB | -51% |
| [CamemBERTa-L2](https://huggingface.co/antoinelouis/camemberta-L2) | 40.9M | 164MB | -63% | | {"language": ["fr"], "license": "mit", "library_name": "transformers", "inference": false, "pipeline_tag": "feature-extraction"} | antoinelouis/camemberta-L8 | null | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"fr",
"license:mit",
"region:us"
] | null | 2024-05-02T13:11:39+00:00 |
feature-extraction | transformers | # CamemBERTa-L6
This model is a pruned version of the pre-trained [CamemBERTa](https://huggingface.co/almanach/camemberta-base) checkpoint, obtained by [dropping the top layers](https://doi.org/10.48550/arXiv.2004.03844) from the original model.
## Usage
You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions such as text classification, extractive question answering, or semantic search. For tasks such as text generation, you should look at autoregressive models like [BelGPT-2](https://huggingface.co/antoinelouis/belgpt2).
You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='antoinelouis/camemberta-L6')
unmasker("Bonjour, je suis un [MASK] modèle.")
```
You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/camemberta-L6')
model = AutoModel.from_pretrained('antoinelouis/camemberta-L6')
text = "Remplacez-moi par le texte de votre choix."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Variations
CamemBERTa was originally released in a base (112M) version. The following checkpoints prune the base variation by dropping the top 2, 4, 6, 8, and 10 pretrained encoding layers, respectively.
| Model | #Params | Size | Pruning |
|----------------------------------------------------------------------|:-------:|:-----:|:-------:|
| [CamemBERTa-base](https://huggingface.co/almanach/camemberta-base) | 111.8M | 447MB | - |
| | | | |
| [CamemBERTa-L10](https://huggingface.co/antoinelouis/camemberta-L10) | 97.6M | 386MB | -14% |
| [CamemBERTa-L8](https://huggingface.co/antoinelouis/camemberta-L8) | 83.5M | 334MB | -25% |
| [CamemBERTa-L6](https://huggingface.co/antoinelouis/camemberta-L6) | 69.3M | 277MB | -38% |
| [CamemBERTa-L4](https://huggingface.co/antoinelouis/camemberta-L4) | 55.1M | 220MB | -51% |
| [CamemBERTa-L2](https://huggingface.co/antoinelouis/camemberta-L2) | 40.9M | 164MB | -63% | | {"language": ["fr"], "license": "mit", "library_name": "transformers", "inference": false, "pipeline_tag": "feature-extraction"} | antoinelouis/camemberta-L6 | null | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"fr",
"license:mit",
"region:us"
] | null | 2024-05-02T13:12:08+00:00 |
null | null |
# ChimerallamaConfigurable-7B
ChimerallamaConfigurable-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
* [mlabonne/ChimeraLlama-3-8B-v3](https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3)
* [vicgalle/Configurable-Llama-3-8B-v0.2](https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.2)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: mlabonne/ChimeraLlama-3-8B-v3
layer_range: [0, 32]
- model: vicgalle/Configurable-Llama-3-8B-v0.2
layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/ChimeraLlama-3-8B-v3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/ChimerallamaConfigurable-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["mlabonne/ChimeraLlama-3-8B-v3", "vicgalle/Configurable-Llama-3-8B-v0.2"]} | automerger/ChimerallamaConfigurable-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:mlabonne/ChimeraLlama-3-8B-v3",
"base_model:vicgalle/Configurable-Llama-3-8B-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T13:12:16+00:00 |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 | {"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"} | sravaniayyagari/lora_model_3 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"region:us"
] | null | 2024-05-02T13:12:19+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Mixtral_AI_CyberFriend_1.0
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
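A minimal loading sketch with Unsloth's fast loader; the sequence length and 4-bit flag are assumptions.
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_CyberUltron_DPO",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # assumption
)
FastLanguageModel.for_inference(model)  # enable the optimized inference path
```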
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "LeroyDyer/Mixtral_AI_CyberFriend_1.0"} | LeroyDyer/Mixtral_AI_CyberUltron_DPO | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:LeroyDyer/Mixtral_AI_CyberFriend_1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:12:25+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
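Since the card leaves this section blank, here is a generic, hedged sketch based only on the repo metadata (a StableLM-architecture causal LM tagged as conversational; the prompt is illustrative and a chat template is assumed to be defined):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpicJhon/13-sn6m6"  # repo id from this card's metadata

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The repo is tagged "conversational", so a chat template is presumably defined.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```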
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | EpicJhon/13-sn6m6 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:12:31+00:00 |
feature-extraction | transformers | # CamemBERTa-L4
This model is a pruned version of the pre-trained [CamemBERTa](https://huggingface.co/almanach/camemberta-base) checkpoint, obtained by [dropping the top layers](https://doi.org/10.48550/arXiv.2004.03844) of the original model.
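For illustration, this kind of top-layer pruning can be reproduced roughly as follows (a hedged sketch assuming the standard 12-layer base and the DeBERTa-v2 module layout used by CamemBERTa; not the exact script behind this checkpoint):

```python
import torch
from transformers import AutoModel

# Load the full 12-layer base model and keep only its bottom 4 encoder layers.
model = AutoModel.from_pretrained("almanach/camemberta-base")
keep = 4  # number of bottom layers kept by this L4 variant

model.encoder.layer = torch.nn.ModuleList(model.encoder.layer[:keep])
model.config.num_hidden_layers = keep

model.save_pretrained("camemberta-L4")  # illustrative output path
```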
## Usage
You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions, such as text classification, extractive question answering, or semantic search. For tasks such as text generation, you should look at autoregressive models like [BelGPT-2](https://huggingface.co/antoinelouis/belgpt2).
You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='antoinelouis/camemberta-L4')
unmasker("Bonjour, je suis un [MASK] modèle.")
```
You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/camemberta-L4')
model = AutoModel.from_pretrained('antoinelouis/camemberta-L4')
text = "Remplacez-moi par le texte de votre choix."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Variations
CamemBERTa was originally released in a base (112M) version. The following checkpoints prune the base variant by dropping the top 2, 4, 6, 8, and 10 pretrained encoder layers, respectively.
| Model | #Params | Size | Pruning |
|----------------------------------------------------------------------|:-------:|:-----:|:-------:|
| [CamemBERTa-base](https://huggingface.co/almanach/camemberta-base) | 111.8M | 447MB | - |
| | | | |
| [CamemBERTa-L10](https://huggingface.co/antoinelouis/camemberta-L10) | 97.6M | 386MB | -14% |
| [CamemBERTa-L8](https://huggingface.co/antoinelouis/camemberta-L8) | 83.5M | 334MB | -25% |
| [CamemBERTa-L6](https://huggingface.co/antoinelouis/camemberta-L6) | 69.3M | 277MB | -38% |
| [CamemBERTa-L4](https://huggingface.co/antoinelouis/camemberta-L4) | 55.1M | 220MB | -51% |
| [CamemBERTa-L2](https://huggingface.co/antoinelouis/camemberta-L2) | 40.9M | 164MB | -63% | | {"language": ["fr"], "license": "mit", "library_name": "transformers", "inference": false, "pipeline_tag": "feature-extraction"} | antoinelouis/camemberta-L4 | null | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"fr",
"license:mit",
"region:us"
] | null | 2024-05-02T13:12:40+00:00 |
feature-extraction | transformers | # CamemBERTa-L2
This model is a pruned version of the pre-trained [CamemBERTa](https://huggingface.co/almanach/camemberta-base) checkpoint, obtained by [dropping the top layers](https://doi.org/10.48550/arXiv.2004.03844) of the original model.
## Usage
You can use the raw model for masked language modeling (MLM), but it's mostly intended to be fine-tuned on a downstream task, especially one that uses the whole sentence to make decisions, such as text classification, extractive question answering, or semantic search. For tasks such as text generation, you should look at autoregressive models like [BelGPT-2](https://huggingface.co/antoinelouis/belgpt2).
You can use this model directly with a pipeline for [masked language modeling](https://huggingface.co/tasks/fill-mask):
```python
from transformers import pipeline
unmasker = pipeline('fill-mask', model='antoinelouis/camemberta-L2')
unmasker("Bonjour, je suis un [MASK] modèle.")
```
You can also use this model to [extract the features](https://huggingface.co/tasks/feature-extraction) of a given text:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('antoinelouis/camemberta-L2')
model = AutoModel.from_pretrained('antoinelouis/camemberta-L2')
text = "Remplacez-moi par le texte de votre choix."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Variations
CamemBERTa was originally released in a base (112M) version. The following checkpoints prune the base variant by dropping the top 2, 4, 6, 8, and 10 pretrained encoder layers, respectively.
| Model | #Params | Size | Pruning |
|----------------------------------------------------------------------|:-------:|:-----:|:-------:|
| [CamemBERTa-base](https://huggingface.co/almanach/camemberta-base) | 111.8M | 447MB | - |
| | | | |
| [CamemBERTa-L10](https://huggingface.co/antoinelouis/camemberta-L10) | 97.6M | 386MB | -14% |
| [CamemBERTa-L8](https://huggingface.co/antoinelouis/camemberta-L8) | 83.5M | 334MB | -25% |
| [CamemBERTa-L6](https://huggingface.co/antoinelouis/camemberta-L6) | 69.3M | 277MB | -38% |
| [CamemBERTa-L4](https://huggingface.co/antoinelouis/camemberta-L4) | 55.1M | 220MB | -51% |
| [CamemBERTa-L2](https://huggingface.co/antoinelouis/camemberta-L2) | 40.9M | 164MB | -63% | | {"language": ["fr"], "license": "mit", "library_name": "transformers", "inference": false, "pipeline_tag": "feature-extraction"} | antoinelouis/camemberta-L2 | null | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"fr",
"license:mit",
"region:us"
] | null | 2024-05-02T13:13:01+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5.baseline
This model is a fine-tuned version of [samzirbo/mT5.en-es.pretrained](https://huggingface.co/samzirbo/mT5.en-es.pretrained) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5093
- Bleu: 38.6464
- Meteor: 0.661
- Chrf++: 60.6878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code reconstruction follows after the list):
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- training_steps: 30000
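These settings correspond roughly to the following `Seq2SeqTrainingArguments` (a hedged reconstruction — the actual training script is not part of this card, and the output directory name is illustrative):

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="mt5.baseline",
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    max_steps=30000,
    predict_with_generate=True,  # needed to compute BLEU/METEOR/chrF++ during eval
)
```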
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor | Chrf++ |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:------:|:-------:|
| 4.0484 | 0.3215 | 3000 | 2.1130 | 29.7312 | 0.5872 | 53.2622 |
| 2.3309 | 0.6431 | 6000 | 1.8472 | 33.4852 | 0.6209 | 56.6127 |
| 2.0987 | 0.9646 | 9000 | 1.7299 | 35.1261 | 0.6355 | 58.0524 |
| 1.9355 | 1.2862 | 12000 | 1.6594 | 36.3851 | 0.6449 | 58.9991 |
| 1.8568 | 1.6077 | 15000 | 1.5978 | 37.0844 | 0.6499 | 59.4457 |
| 1.8039 | 1.9293 | 18000 | 1.5601 | 37.7628 | 0.6562 | 60.145 |
| 1.7271 | 2.2508 | 21000 | 1.5298 | 38.1387 | 0.6572 | 60.3042 |
| 1.6984 | 2.5723 | 24000 | 1.5148 | 38.5117 | 0.66 | 60.5765 |
| 1.6846 | 2.8939 | 27000 | 1.5096 | 38.5563 | 0.6604 | 60.6276 |
| 1.6687 | 3.2154 | 30000 | 1.5093 | 38.6464 | 0.661 | 60.6878 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "samzirbo/mT5.en-es.pretrained", "model-index": [{"name": "mt5.baseline", "results": []}]} | samzirbo/mT5.baseline | null | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:samzirbo/mT5.en-es.pretrained",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:13:29+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo NousResearch/Hermes-2-Pro-Llama-3-8B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-1bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-1bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
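For reference, producing such an HQQ-quantized checkpoint yourself looks roughly like the following (a hedged sketch: the group size, output path, and exact API calls are illustrative of the `hqq` library, not necessarily the configuration Pruna used):

```python
from hqq.engine.hf import HQQModelForCausalLM
from hqq.core.quantize import BaseQuantizeConfig

# Load the full-precision base model, then quantize its weights in place with HQQ.
model = HQQModelForCausalLM.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")
quant_config = BaseQuantizeConfig(nbits=1, group_size=64)  # 1-bit weights, illustrative group size
model.quantize_model(quant_config=quant_config)

model.save_quantized("Hermes-2-Pro-Llama-3-8B-HQQ-1bit")  # illustrative output path
```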
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, NousResearch/Hermes-2-Pro-Llama-3-8B, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B"} | PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-1bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:16:02+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5992
- Accuracy: 0.7980
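As a quick usage sketch (hedged — the repo id comes from this card's metadata, and the emotion label names depend on the uploaded config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ilyi/distill-bert-uncased-tweeteval-emotion")
print(classifier("I can't believe how great this day turned out!"))
```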
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 204 | 0.7084 | 0.7635 |
| No log | 2.0 | 408 | 0.5992 | 0.7980 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_awesome_model", "results": []}]} | ilyi/distill-bert-uncased-tweeteval-emotion | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:16:04+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo NousResearch/Hermes-2-Pro-Llama-3-8B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-4bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-4bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, NousResearch/Hermes-2-Pro-Llama-3-8B, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B"} | PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-4bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:16:37+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo NousResearch/Hermes-2-Pro-Llama-3-8B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install hqq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM
from hqq.models.hf.base import AutoHQQHFModel
try:
    model = HQQModelForCausalLM.from_quantized("PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-2bit-smashed", device_map='auto')
except Exception:
    model = AutoHQQHFModel.from_quantized("PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-2bit-smashed")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, NousResearch/Hermes-2-Pro-Llama-3-8B, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B"} | PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-HQQ-2bit-smashed | null | [
"transformers",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:17:23+00:00 |
text-generation | transformers | <!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/CP4VSgck)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with awq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo NousResearch/Hermes-2-Pro-Llama-3-8B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install autoawq
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from awq import AutoAWQForCausalLM
model = AutoAWQForCausalLM.from_quantized("PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Hermes-2-Pro-Llama-3-8B")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
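For reference, an AWQ 4-bit quantization pass with `autoawq` typically looks like this (a hedged sketch — the quantization config values are the library's common defaults, not necessarily what Pruna used, and the output path is illustrative):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_id = "NousResearch/Hermes-2-Pro-Llama-3-8B"
model = AutoAWQForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Common AWQ settings: 4-bit weights, group size 128, GEMM kernels.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized("Hermes-2-Pro-Llama-3-8B-AWQ-4bit")
tokenizer.save_pretrained("Hermes-2-Pro-Llama-3-8B-AWQ-4bit")
```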
## Configurations
The configuration info is in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, NousResearch/Hermes-2-Pro-Llama-3-8B, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). | {"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B"} | PrunaAI/NousResearch-Hermes-2-Pro-Llama-3-8B-AWQ-4bit-smashed | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pruna-ai",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T13:18:02+00:00 |
null | null | {} | squaadinc/1714655890617x912788715466391600 | null | [
"region:us"
] | null | 2024-05-02T13:18:13+00:00 |
|
token-classification | transformers | {} | pontusnorman123/layoutlmv3-finetuned-sweset3_wild500_v3 | null | [
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:18:34+00:00 |
|
null | null | {} | jakubrevaj/virtual | null | [
"region:us"
] | null | 2024-05-02T13:18:50+00:00 |
|
text-classification | setfit |
# SetFit Aspect Model with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates (roughly sketched below).
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
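Conceptually, the candidate-selection step (1) can be pictured like this (a simplified stand-in using noun chunks, not SetFit's exact extraction logic):

```python
import spacy

# Rough illustration of step 1: treat noun chunks as aspect span candidates.
nlp = spacy.load("en_core_web_lg")
doc = nlp("The food was great, but the venue is just way too busy.")
candidates = [chunk.text for chunk in doc.noun_chunks]
print(candidates)  # e.g. ['The food', 'the venue']
```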
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_lg
- **SetFitABSA Aspect Model:** [zeroix07/setfit-absa-model-aspect](https://huggingface.co/zeroix07/setfit-absa-model-aspect)
- **SetFitABSA Polarity Model:** [zeroix07/setfit-absa-model-polarity](https://huggingface.co/zeroix07/setfit-absa-model-polarity)
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| no aspect | <ul><li>'food:The food is really delicious! The meat is tender and the spices are well seasoned. I will definitely come back again.'</li><li>'meat:The food is really delicious! The meat is tender and the spices are well seasoned. I will definitely come back again.'</li><li>'spices:The food is really delicious! The meat is tender and the spices are well seasoned. I will definitely come back again.'</li></ul> |
| aspect | <ul><li>'Service:Service is standard, nothing extraordinary.'</li><li>'Service:Service from the staff is very friendly.'</li><li>'Service:Service from the staff is very fast and professional.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"zeroix07/setfit-absa-model-aspect",
"zeroix07/setfit-absa-model-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 14.3487 | 72 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 1701 |
| aspect | 14 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
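Assembled into code, this setup looks roughly like the following (a hedged reconstruction — the toy training rows are illustrative placeholders, not the actual training data, and unlisted arguments are left at their defaults):

```python
from datasets import Dataset
from setfit import AbsaModel, AbsaTrainer, TrainingArguments

# Toy ABSA training set in the column format SetFit expects (illustrative rows).
train_dataset = Dataset.from_dict({
    "text": ["Service from the staff is very friendly."],
    "span": ["Service"],
    "label": ["Positive"],
    "ordinal": [0],
})

model = AbsaModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    spacy_model="en_core_web_lg",
)
args = TrainingArguments(
    batch_size=4,
    num_epochs=1,
    num_iterations=20,
    body_learning_rate=2e-05,
    head_learning_rate=0.01,
    seed=42,
)
trainer = AbsaTrainer(model, args=args, train_dataset=train_dataset)
trainer.train()
```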
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0001 | 1 | 0.34 | - |
| 0.0029 | 50 | 0.318 | - |
| 0.0058 | 100 | 0.2344 | - |
| 0.0087 | 150 | 0.1925 | - |
| 0.0117 | 200 | 0.1893 | - |
| 0.0146 | 250 | 0.014 | - |
| 0.0175 | 300 | 0.0017 | - |
| 0.0204 | 350 | 0.0041 | - |
| 0.0233 | 400 | 0.0008 | - |
| 0.0262 | 450 | 0.0008 | - |
| 0.0292 | 500 | 0.0003 | - |
| 0.0321 | 550 | 0.0003 | - |
| 0.0350 | 600 | 0.0004 | - |
| 0.0379 | 650 | 0.0004 | - |
| 0.0408 | 700 | 0.0004 | - |
| 0.0437 | 750 | 0.0008 | - |
| 0.0466 | 800 | 0.0004 | - |
| 0.0496 | 850 | 0.0002 | - |
| 0.0525 | 900 | 0.0003 | - |
| 0.0554 | 950 | 0.0001 | - |
| 0.0583 | 1000 | 0.0001 | - |
| 0.0612 | 1050 | 0.0002 | - |
| 0.0641 | 1100 | 0.0002 | - |
| 0.0671 | 1150 | 0.0002 | - |
| 0.0700 | 1200 | 0.0001 | - |
| 0.0729 | 1250 | 0.0002 | - |
| 0.0758 | 1300 | 0.0001 | - |
| 0.0787 | 1350 | 0.0 | - |
| 0.0816 | 1400 | 0.0001 | - |
| 0.0845 | 1450 | 0.0001 | - |
| 0.0875 | 1500 | 0.0001 | - |
| 0.0904 | 1550 | 0.0001 | - |
| 0.0933 | 1600 | 0.0001 | - |
| 0.0962 | 1650 | 0.0001 | - |
| 0.0991 | 1700 | 0.0 | - |
| 0.1020 | 1750 | 0.0001 | - |
| 0.1050 | 1800 | 0.0001 | - |
| 0.1079 | 1850 | 0.0001 | - |
| 0.1108 | 1900 | 0.0001 | - |
| 0.1137 | 1950 | 0.0 | - |
| 0.1166 | 2000 | 0.0001 | - |
| 0.1195 | 2050 | 0.0001 | - |
| 0.1224 | 2100 | 0.0 | - |
| 0.1254 | 2150 | 0.0006 | - |
| 0.1283 | 2200 | 0.0002 | - |
| 0.1312 | 2250 | 0.0 | - |
| 0.1341 | 2300 | 0.0 | - |
| 0.1370 | 2350 | 0.2106 | - |
| 0.1399 | 2400 | 0.0 | - |
| 0.1429 | 2450 | 0.0001 | - |
| 0.1458 | 2500 | 0.0001 | - |
| 0.1487 | 2550 | 0.0 | - |
| 0.1516 | 2600 | 0.0 | - |
| 0.1545 | 2650 | 0.0 | - |
| 0.1574 | 2700 | 0.0 | - |
| 0.1603 | 2750 | 0.0 | - |
| 0.1633 | 2800 | 0.0 | - |
| 0.1662 | 2850 | 0.0001 | - |
| 0.1691 | 2900 | 0.0 | - |
| 0.1720 | 2950 | 0.0 | - |
| 0.1749 | 3000 | 0.0 | - |
| 0.1778 | 3050 | 0.0001 | - |
| 0.1808 | 3100 | 0.0 | - |
| 0.1837 | 3150 | 0.0 | - |
| 0.1866 | 3200 | 0.0001 | - |
| 0.1895 | 3250 | 0.0 | - |
| 0.1924 | 3300 | 0.0001 | - |
| 0.1953 | 3350 | 0.0001 | - |
| 0.1983 | 3400 | 0.0 | - |
| 0.2012 | 3450 | 0.0 | - |
| 0.2041 | 3500 | 0.0 | - |
| 0.2070 | 3550 | 0.0 | - |
| 0.2099 | 3600 | 0.0 | - |
| 0.2128 | 3650 | 0.0 | - |
| 0.2157 | 3700 | 0.0 | - |
| 0.2187 | 3750 | 0.0 | - |
| 0.2216 | 3800 | 0.0 | - |
| 0.2245 | 3850 | 0.0 | - |
| 0.2274 | 3900 | 0.0 | - |
| 0.2303 | 3950 | 0.0 | - |
| 0.2332 | 4000 | 0.0 | - |
| 0.2362 | 4050 | 0.0 | - |
| 0.2391 | 4100 | 0.0 | - |
| 0.2420 | 4150 | 0.0 | - |
| 0.2449 | 4200 | 0.0 | - |
| 0.2478 | 4250 | 0.0 | - |
| 0.2507 | 4300 | 0.0 | - |
| 0.2536 | 4350 | 0.0 | - |
| 0.2566 | 4400 | 0.0 | - |
| 0.2595 | 4450 | 0.0 | - |
| 0.2624 | 4500 | 0.0 | - |
| 0.2653 | 4550 | 0.0 | - |
| 0.2682 | 4600 | 0.0 | - |
| 0.2711 | 4650 | 0.0 | - |
| 0.2741 | 4700 | 0.0001 | - |
| 0.2770 | 4750 | 0.0 | - |
| 0.2799 | 4800 | 0.0 | - |
| 0.2828 | 4850 | 0.0 | - |
| 0.2857 | 4900 | 0.0 | - |
| 0.2886 | 4950 | 0.0 | - |
| 0.2915 | 5000 | 0.0 | - |
| 0.2945 | 5050 | 0.0 | - |
| 0.2974 | 5100 | 0.0 | - |
| 0.3003 | 5150 | 0.0 | - |
| 0.3032 | 5200 | 0.0 | - |
| 0.3061 | 5250 | 0.0 | - |
| 0.3090 | 5300 | 0.0 | - |
| 0.3120 | 5350 | 0.0 | - |
| 0.3149 | 5400 | 0.0 | - |
| 0.3178 | 5450 | 0.0 | - |
| 0.3207 | 5500 | 0.0 | - |
| 0.3236 | 5550 | 0.0 | - |
| 0.3265 | 5600 | 0.0 | - |
| 0.3294 | 5650 | 0.0 | - |
| 0.3324 | 5700 | 0.0 | - |
| 0.3353 | 5750 | 0.0 | - |
| 0.3382 | 5800 | 0.0 | - |
| 0.3411 | 5850 | 0.0 | - |
| 0.3440 | 5900 | 0.0 | - |
| 0.3469 | 5950 | 0.0 | - |
| 0.3499 | 6000 | 0.0 | - |
| 0.3528 | 6050 | 0.0 | - |
| 0.3557 | 6100 | 0.0 | - |
| 0.3586 | 6150 | 0.0 | - |
| 0.3615 | 6200 | 0.0 | - |
| 0.3644 | 6250 | 0.0 | - |
| 0.3673 | 6300 | 0.0 | - |
| 0.3703 | 6350 | 0.0 | - |
| 0.3732 | 6400 | 0.0001 | - |
| 0.3761 | 6450 | 0.0 | - |
| 0.3790 | 6500 | 0.0 | - |
| 0.3819 | 6550 | 0.0 | - |
| 0.3848 | 6600 | 0.0 | - |
| 0.3878 | 6650 | 0.0 | - |
| 0.3907 | 6700 | 0.0 | - |
| 0.3936 | 6750 | 0.0 | - |
| 0.3965 | 6800 | 0.0 | - |
| 0.3994 | 6850 | 0.0 | - |
| 0.4023 | 6900 | 0.0 | - |
| 0.4052 | 6950 | 0.0 | - |
| 0.4082 | 7000 | 0.0 | - |
| 0.4111 | 7050 | 0.0 | - |
| 0.4140 | 7100 | 0.0001 | - |
| 0.4169 | 7150 | 0.0 | - |
| 0.4198 | 7200 | 0.0 | - |
| 0.4227 | 7250 | 0.0 | - |
| 0.4257 | 7300 | 0.0 | - |
| 0.4286 | 7350 | 0.0 | - |
| 0.4315 | 7400 | 0.0 | - |
| 0.4344 | 7450 | 0.0 | - |
| 0.4373 | 7500 | 0.0 | - |
| 0.4402 | 7550 | 0.0 | - |
| 0.4431 | 7600 | 0.0 | - |
| 0.4461 | 7650 | 0.0 | - |
| 0.4490 | 7700 | 0.0 | - |
| 0.4519 | 7750 | 0.0 | - |
| 0.4548 | 7800 | 0.0 | - |
| 0.4577 | 7850 | 0.0 | - |
| 0.4606 | 7900 | 0.0 | - |
| 0.4636 | 7950 | 0.0 | - |
| 0.4665 | 8000 | 0.0 | - |
| 0.4694 | 8050 | 0.0 | - |
| 0.4723 | 8100 | 0.0 | - |
| 0.4752 | 8150 | 0.0 | - |
| 0.4781 | 8200 | 0.0 | - |
| 0.4810 | 8250 | 0.0 | - |
| 0.4840 | 8300 | 0.0 | - |
| 0.4869 | 8350 | 0.0001 | - |
| 0.4898 | 8400 | 0.0 | - |
| 0.4927 | 8450 | 0.0 | - |
| 0.4956 | 8500 | 0.0 | - |
| 0.4985 | 8550 | 0.0 | - |
| 0.5015 | 8600 | 0.0 | - |
| 0.5044 | 8650 | 0.0 | - |
| 0.5073 | 8700 | 0.0 | - |
| 0.5102 | 8750 | 0.0 | - |
| 0.5131 | 8800 | 0.0 | - |
| 0.5160 | 8850 | 0.0 | - |
| 0.5190 | 8900 | 0.0 | - |
| 0.5219 | 8950 | 0.0 | - |
| 0.5248 | 9000 | 0.0 | - |
| 0.5277 | 9050 | 0.0 | - |
| 0.5306 | 9100 | 0.0 | - |
| 0.5335 | 9150 | 0.0 | - |
| 0.5364 | 9200 | 0.0 | - |
| 0.5394 | 9250 | 0.0 | - |
| 0.5423 | 9300 | 0.0 | - |
| 0.5452 | 9350 | 0.0 | - |
| 0.5481 | 9400 | 0.0 | - |
| 0.5510 | 9450 | 0.0 | - |
| 0.5539 | 9500 | 0.0 | - |
| 0.5569 | 9550 | 0.0 | - |
| 0.5598 | 9600 | 0.0 | - |
| 0.5627 | 9650 | 0.0 | - |
| 0.5656 | 9700 | 0.0 | - |
| 0.5685 | 9750 | 0.0 | - |
| 0.5714 | 9800 | 0.0 | - |
| 0.5743 | 9850 | 0.0 | - |
| 0.5773 | 9900 | 0.0 | - |
| 0.5802 | 9950 | 0.0 | - |
| 0.5831 | 10000 | 0.0 | - |
| 0.5860 | 10050 | 0.0 | - |
| 0.5889 | 10100 | 0.0 | - |
| 0.5918 | 10150 | 0.0 | - |
| 0.5948 | 10200 | 0.0 | - |
| 0.5977 | 10250 | 0.0 | - |
| 0.6006 | 10300 | 0.0 | - |
| 0.6035 | 10350 | 0.0 | - |
| 0.6064 | 10400 | 0.0 | - |
| 0.6093 | 10450 | 0.0 | - |
| 0.6122 | 10500 | 0.0 | - |
| 0.6152 | 10550 | 0.0 | - |
| 0.6181 | 10600 | 0.0 | - |
| 0.6210 | 10650 | 0.0 | - |
| 0.6239 | 10700 | 0.0 | - |
| 0.6268 | 10750 | 0.0 | - |
| 0.6297 | 10800 | 0.0 | - |
| 0.6327 | 10850 | 0.0 | - |
| 0.6356 | 10900 | 0.0 | - |
| 0.6385 | 10950 | 0.0 | - |
| 0.6414 | 11000 | 0.0 | - |
| 0.6443 | 11050 | 0.0 | - |
| 0.6472 | 11100 | 0.0 | - |
| 0.6501 | 11150 | 0.0 | - |
| 0.6531 | 11200 | 0.0 | - |
| 0.6560 | 11250 | 0.0 | - |
| 0.6589 | 11300 | 0.0 | - |
| 0.6618 | 11350 | 0.0 | - |
| 0.6647 | 11400 | 0.0 | - |
| 0.6676 | 11450 | 0.0 | - |
| 0.6706 | 11500 | 0.0 | - |
| 0.6735 | 11550 | 0.0 | - |
| 0.6764 | 11600 | 0.0 | - |
| 0.6793 | 11650 | 0.0 | - |
| 0.6822 | 11700 | 0.0 | - |
| 0.6851 | 11750 | 0.0 | - |
| 0.6880 | 11800 | 0.0 | - |
| 0.6910 | 11850 | 0.0 | - |
| 0.6939 | 11900 | 0.0 | - |
| 0.6968 | 11950 | 0.0 | - |
| 0.6997 | 12000 | 0.0 | - |
| 0.7026 | 12050 | 0.0 | - |
| 0.7055 | 12100 | 0.0 | - |
| 0.7085 | 12150 | 0.0 | - |
| 0.7114 | 12200 | 0.0 | - |
| 0.7143 | 12250 | 0.0 | - |
| 0.7172 | 12300 | 0.0 | - |
| 0.7201 | 12350 | 0.0 | - |
| 0.7230 | 12400 | 0.0 | - |
| 0.7259 | 12450 | 0.0 | - |
| 0.7289 | 12500 | 0.0 | - |
| 0.7318 | 12550 | 0.0 | - |
| 0.7347 | 12600 | 0.0 | - |
| 0.7376 | 12650 | 0.0 | - |
| 0.7405 | 12700 | 0.0 | - |
| 0.7434 | 12750 | 0.0 | - |
| 0.7464 | 12800 | 0.0 | - |
| 0.7493 | 12850 | 0.0 | - |
| 0.7522 | 12900 | 0.0 | - |
| 0.7551 | 12950 | 0.0 | - |
| 0.7580 | 13000 | 0.0 | - |
| 0.7609 | 13050 | 0.0 | - |
| 0.7638 | 13100 | 0.0 | - |
| 0.7668 | 13150 | 0.0 | - |
| 0.7697 | 13200 | 0.0 | - |
| 0.7726 | 13250 | 0.0 | - |
| 0.7755 | 13300 | 0.0 | - |
| 0.7784 | 13350 | 0.0 | - |
| 0.7813 | 13400 | 0.0 | - |
| 0.7843 | 13450 | 0.0 | - |
| 0.7872 | 13500 | 0.0 | - |
| 0.7901 | 13550 | 0.0 | - |
| 0.7930 | 13600 | 0.0 | - |
| 0.7959 | 13650 | 0.0 | - |
| 0.7988 | 13700 | 0.0 | - |
| 0.8017 | 13750 | 0.0 | - |
| 0.8047 | 13800 | 0.0 | - |
| 0.8076 | 13850 | 0.0 | - |
| 0.8105 | 13900 | 0.0 | - |
| 0.8134 | 13950 | 0.0 | - |
| 0.8163 | 14000 | 0.0 | - |
| 0.8192 | 14050 | 0.0 | - |
| 0.8222 | 14100 | 0.0 | - |
| 0.8251 | 14150 | 0.0 | - |
| 0.8280 | 14200 | 0.0 | - |
| 0.8309 | 14250 | 0.0 | - |
| 0.8338 | 14300 | 0.0 | - |
| 0.8367 | 14350 | 0.0 | - |
| 0.8397 | 14400 | 0.0 | - |
| 0.8426 | 14450 | 0.0 | - |
| 0.8455 | 14500 | 0.0 | - |
| 0.8484 | 14550 | 0.0 | - |
| 0.8513 | 14600 | 0.0 | - |
| 0.8542 | 14650 | 0.0 | - |
| 0.8571 | 14700 | 0.0 | - |
| 0.8601 | 14750 | 0.0 | - |
| 0.8630 | 14800 | 0.0 | - |
| 0.8659 | 14850 | 0.0 | - |
| 0.8688 | 14900 | 0.0 | - |
| 0.8717 | 14950 | 0.0 | - |
| 0.8746 | 15000 | 0.0 | - |
| 0.8776 | 15050 | 0.0 | - |
| 0.8805 | 15100 | 0.0 | - |
| 0.8834 | 15150 | 0.0 | - |
| 0.8863 | 15200 | 0.0 | - |
| 0.8892 | 15250 | 0.0 | - |
| 0.8921 | 15300 | 0.0 | - |
| 0.8950 | 15350 | 0.0 | - |
| 0.8980 | 15400 | 0.0 | - |
| 0.9009 | 15450 | 0.0 | - |
| 0.9038 | 15500 | 0.0 | - |
| 0.9067 | 15550 | 0.0 | - |
| 0.9096 | 15600 | 0.0 | - |
| 0.9125 | 15650 | 0.0 | - |
| 0.9155 | 15700 | 0.0 | - |
| 0.9184 | 15750 | 0.0 | - |
| 0.9213 | 15800 | 0.0 | - |
| 0.9242 | 15850 | 0.0 | - |
| 0.9271 | 15900 | 0.0 | - |
| 0.9300 | 15950 | 0.0 | - |
| 0.9329 | 16000 | 0.0 | - |
| 0.9359 | 16050 | 0.0 | - |
| 0.9388 | 16100 | 0.0 | - |
| 0.9417 | 16150 | 0.0 | - |
| 0.9446 | 16200 | 0.0 | - |
| 0.9475 | 16250 | 0.0 | - |
| 0.9504 | 16300 | 0.0 | - |
| 0.9534 | 16350 | 0.0 | - |
| 0.9563 | 16400 | 0.0 | - |
| 0.9592 | 16450 | 0.0 | - |
| 0.9621 | 16500 | 0.0 | - |
| 0.9650 | 16550 | 0.0 | - |
| 0.9679 | 16600 | 0.0 | - |
| 0.9708 | 16650 | 0.0 | - |
| 0.9738 | 16700 | 0.0 | - |
| 0.9767 | 16750 | 0.0 | - |
| 0.9796 | 16800 | 0.0 | - |
| 0.9825 | 16850 | 0.0 | - |
| 0.9854 | 16900 | 0.0 | - |
| 0.9883 | 16950 | 0.0 | - |
| 0.9913 | 17000 | 0.0 | - |
| 0.9942 | 17050 | 0.0 | - |
| 0.9971 | 17100 | 0.0 | - |
| 1.0 | 17150 | 0.0 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.4
- Transformers: 4.39.3
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "widget": [{"text": "food portions:The food portions are quite filling, but not too much."}, {"text": "waiters:The waiters are quite alert in helping customers, but cannot always answer all questions in detail."}, {"text": "experience:The atmosphere here is pleasant, although it doesn't provide an extraordinary experience."}, {"text": "food:The food does not have a distinctive taste."}, {"text": "restaurant atmosphere:The restaurant atmosphere is too stiff and unpleasant."}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit Aspect Model with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]} | zeroix07/setfit-absa-model-aspect | null | [
"setfit",
"safetensors",
"mpnet",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | null | 2024-05-02T13:19:24+00:00 |
text-classification | setfit |
# SetFit Polarity Model with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_lg
- **SetFitABSA Aspect Model:** [zeroix07/setfit-absa-model-aspect](https://huggingface.co/zeroix07/setfit-absa-model-aspect)
- **SetFitABSA Polarity Model:** [zeroix07/setfit-absa-model-polarity](https://huggingface.co/zeroix07/setfit-absa-model-polarity)
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Neutral | <ul><li>'Service is standard,:Service is standard, nothing extraordinary.'</li><li>'Service is quite fast:Service is quite fast and quite friendly.'</li><li>'Service that is quite:Service that is quite efficient but not friendly makes the dining experience neutral.'</li></ul> |
| Positive | <ul><li>'Service from the staff:Service from the staff is very friendly.'</li><li>'Service from the staff:Service from the staff is very fast and professional.'</li><li>'Service from the staff:Service from the staff is quite friendly and helpful.'</li></ul> |
| Negative | <ul><li>'Service is very slow:Service is very slow and not friendly at all.'</li><li>'Service is very slow:Service is very slow and inefficient.'</li><li>'Service is very slow:Service is very slow and unresponsive.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 1.0 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"zeroix07/setfit-absa-model-aspect",
"zeroix07/setfit-absa-model-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
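Each prediction is typically a list of `{'span': ..., 'polarity': ...}` dicts, one per extracted aspect, e.g. `[{'span': 'food', 'polarity': 'positive'}, {'span': 'venue', 'polarity': 'negative'}]` for the sentence above (the exact output shape depends on the installed SetFit version).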
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 7 | 11.1429 | 16 |
| Label | Training Sample Count |
|:---------|:----------------------|
| Negative | 3 |
| Neutral | 6 |
| Positive | 5 |
### Training Hyperparameters
- batch_size: (4, 4)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0071 | 1 | 0.153 | - |
| 0.3571 | 50 | 0.0035 | - |
| 0.7143 | 100 | 0.001 | - |
### Framework Versions
- Python: 3.10.13
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.4
- Transformers: 4.39.3
- PyTorch: 2.1.2
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "widget": [{"text": "Service is quite friendly:Service is quite friendly, not too special but not bad either."}, {"text": "Service was amazingly fast:Service was amazingly fast and efficient, making the visit very enjoyable."}, {"text": "Service is quite good:Service is quite good, not too special but not bad either."}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit Polarity Model with sentence-transformers/paraphrase-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 1.0, "name": "Accuracy"}]}]}]} | zeroix07/setfit-absa-model-polarity | null | [
"setfit",
"safetensors",
"mpnet",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | null | 2024-05-02T13:19:40+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | SotirisLegkas/value_multi_38 | null | [
"transformers",
"safetensors",
"roberta",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:21:41+00:00 |
null | null | {} | Nurinissa/distilbert-base-uncased-finetuned-emotion | null | [
"region:us"
] | null | 2024-05-02T13:21:50+00:00 |
|
text-generation | transformers | {} | Gopika2233/Llama-2-7b-chat-finetune | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:22:21+00:00 |
|
null | null | {} | osma/finna-hkm-images | null | [
"region:us"
] | null | 2024-05-02T13:23:41+00:00 |
|
null | null | {} | bakkensus/phi-2-all-at-once-64-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-02T13:24:30+00:00 |
|
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual `huggingface_sb3` convention and is an assumption; adjust it to the actual file in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub("davideaguglia/PPO-LunarLander-v2", "PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "301.73 +/- 19.61", "name": "mean_reward", "verified": false}]}]}]} | davideaguglia/PPO-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-05-02T13:24:52+00:00 |
feature-extraction | transformers | # fine-tuned/jina-embeddings-v2-base-en-02052024-2a6pbxm4b-webapp_8647177611
## Model Description
fine-tuned/jina-embeddings-v2-base-en-02052024-2a6pbxm4b-webapp_8647177611 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-02052024-2a6pbxm4b-webapp_8647177611).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "fine-tuned/jina-embeddings-v2-base-en-02052024-2a6pbxm4b-webapp_8647177611"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)
tokens = tokenizer("Your text here", return_tensors="pt")
embedding = model(**tokens)
```
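Continuing from the snippet above, the jina remote code normally exposes a convenience `encode()` method; assuming this fine-tune inherits it, a quick similarity check looks like:

```python
import numpy as np

# encode() is provided by the jina remote code (assumed to be inherited here)
docs = ["Your text here", "Another passage to compare against"]
embs = np.asarray(model.encode(docs))
cos = embs[0] @ embs[1] / (np.linalg.norm(embs[0]) * np.linalg.norm(embs[1]))
print(f"cosine similarity: {cos:.3f}")
```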
| {} | fine-tuned/jina-embeddings-v2-base-en-02052024-2a6pbxm4b-webapp_8647177611 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"custom_code",
"region:us"
] | null | 2024-05-02T13:26:08+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nils3.0
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.36.2
- Pytorch 2.3.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "nils3.0", "results": []}]} | pilsneyrouset/nils3.0 | null | [
"peft",
"safetensors",
"mistral",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T13:26:19+00:00 |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
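The card omits a usage example; a generic image-classification sketch should work (the label set depends on the fine-tuning dataset, which is not documented here):

```python
from transformers import pipeline

clf = pipeline("image-classification", model="LIZ009/swin-tiny-patch4-window7-224-finetuned-eurosat")
print(clf("path/to/image.png"))  # the image path is a placeholder
```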
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-eurosat", "results": []}]} | LIZ009/swin-tiny-patch4-window7-224-finetuned-eurosat | null | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:26:25+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | tingting/mistral7b_lora_model_balanced_Data_240 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:26:40+00:00 |
null | null | Some personally collected .splat files, to be viewed with this viewer:
https://github.com/antimatter15/splat
| {} | gvitucci/gaussianSplats | null | [
"region:us"
] | null | 2024-05-02T13:26:51+00:00 |
null | transformers | {"license": "gpl-2.0"} | jncraton/flan-t5-large-pirate-v0.2-ct2-int8 | null | [
"transformers",
"license:gpl-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:27:11+00:00 |
|
null | null | {"license": "mit"} | Diummast/ChudnovskyAlgorithm | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T13:28:03+00:00 |
|
null | null | {} | samzirbo/mt5_baseline | null | [
"region:us"
] | null | 2024-05-02T13:28:06+00:00 |
|
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
deepseek-coder-33b-instruct - GGUF
- Model creator: https://huggingface.co/deepseek-ai/
- Original model: https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [deepseek-coder-33b-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q2_K.gguf) | Q2_K | 11.51GB |
| [deepseek-coder-33b-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ3_XS.gguf) | IQ3_XS | 12.76GB |
| [deepseek-coder-33b-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ3_S.gguf) | IQ3_S | 13.49GB |
| [deepseek-coder-33b-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q3_K_S.gguf) | Q3_K_S | 13.43GB |
| [deepseek-coder-33b-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ3_M.gguf) | IQ3_M | 14.0GB |
| [deepseek-coder-33b-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q3_K.gguf) | Q3_K | 14.99GB |
| [deepseek-coder-33b-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q3_K_M.gguf) | Q3_K_M | 14.99GB |
| [deepseek-coder-33b-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q3_K_L.gguf) | Q3_K_L | 16.35GB |
| [deepseek-coder-33b-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ4_XS.gguf) | IQ4_XS | 16.77GB |
| [deepseek-coder-33b-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_0.gguf) | Q4_0 | 17.53GB |
| [deepseek-coder-33b-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.IQ4_NL.gguf) | IQ4_NL | 17.69GB |
| [deepseek-coder-33b-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_K_S.gguf) | Q4_K_S | 17.64GB |
| [deepseek-coder-33b-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_K.gguf) | Q4_K | 18.57GB |
| [deepseek-coder-33b-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_K_M.gguf) | Q4_K_M | 18.57GB |
| [deepseek-coder-33b-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q4_1.gguf) | Q4_1 | 19.45GB |
| [deepseek-coder-33b-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_0.gguf) | Q5_0 | 21.38GB |
| [deepseek-coder-33b-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_K_S.gguf) | Q5_K_S | 21.38GB |
| [deepseek-coder-33b-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_K.gguf) | Q5_K | 21.92GB |
| [deepseek-coder-33b-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_K_M.gguf) | Q5_K_M | 21.92GB |
| [deepseek-coder-33b-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q5_1.gguf) | Q5_1 | 23.31GB |
| [deepseek-coder-33b-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf/blob/main/deepseek-coder-33b-instruct.Q6_K.gguf) | Q6_K | 25.48GB |
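These are plain GGUF checkpoints, so any llama.cpp-compatible runtime can load them. A minimal sketch with `llama-cpp-python` (file choice and context size are illustrative, and it assumes the GGUF carries chat-template metadata; if not, use the instruction format from the original card below):

```python
# pip install llama-cpp-python   (any GGUF-capable backend also works)
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-33b-instruct.Q4_K_M.gguf",  # any file from the table above
    n_ctx=4096,  # the model supports up to 16K context; raise if you have the memory
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "write a quick sort algorithm in python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```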
Original model description:
---
license: other
license_name: deepseek
license_link: LICENSE
---
<p align="center">
<img width="1000px" alt="DeepSeek Coder" src="https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/pictures/logo.png?raw=true">
</p>
<p align="center"><a href="https://www.deepseek.com/">[🏠Homepage]</a> | <a href="https://coder.deepseek.com/">[🤖 Chat with DeepSeek Coder]</a> | <a href="https://discord.gg/Tc7c45Zzu5">[Discord]</a> | <a href="https://github.com/guoday/assert/blob/main/QR.png?raw=true">[Wechat(微信)]</a> </p>
<hr>
### 1. Introduction of Deepseek Coder
Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a window size of 16K and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.
- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese languages.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A window size of 16K and a fill-in-the-blank task, supporting project-level code completion and infilling tasks.
### 2. Model Summary
deepseek-coder-33b-instruct is a 33B parameter model initialized from deepseek-coder-33b-base and fine-tuned on 2B tokens of instruction data.
- **Home Page:** [DeepSeek](https://deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)
### 3. How to Use
Here are some examples of how to use our model.
#### Chat Model Inference
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-33b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-33b-instruct", trust_remote_code=True, torch_dtype=torch.bfloat16).cuda()
messages=[
{ 'role': 'user', 'content': "write a quick sort algorithm in python."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# tokenizer.eos_token_id is the id of <|EOT|> token
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
### 4. License
This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.
See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
### 5. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
| {} | RichardErkhov/deepseek-ai_-_deepseek-coder-33b-instruct-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-02T13:28:19+00:00 |
null | null | {} | squaadinc/1714656520182x365953636858069000 | null | [
"region:us"
] | null | 2024-05-02T13:28:43+00:00 |
|
sentence-similarity | sentence-transformers |
# SentenceTransformer based on google-bert/bert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) and [sts](https://huggingface.co/datasets/sentence-transformers/stsb) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) <!-- at revision 86b5e0934494bd15c9632b12f734a8a67f723594 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
- [sts](https://huggingface.co/datasets/sentence-transformers/stsb)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tomaarsen/bert-base-uncased-multi-task")
# Run inference
sentences = [
'the guy is paid',
'A man is receiving a contract.',
'A man is racing on his bike.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8288 |
| **spearman_cosine** | **0.8351** |
| pearson_manhattan | 0.7968 |
| spearman_manhattan | 0.8041 |
| pearson_euclidean | 0.7968 |
| spearman_euclidean | 0.8039 |
| pearson_dot | 0.7572 |
| spearman_dot | 0.7697 |
| pearson_max | 0.8288 |
| spearman_max | 0.8351 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.8014 |
| **spearman_cosine** | **0.8049** |
| pearson_manhattan | 0.7935 |
| spearman_manhattan | 0.7935 |
| pearson_euclidean | 0.794 |
| spearman_euclidean | 0.7943 |
| pearson_dot | 0.6989 |
| spearman_dot | 0.6967 |
| pearson_max | 0.8014 |
| spearman_max | 0.8049 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [cc6c526](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/cc6c526380e29912b5c6fa03682da4daf773c013)
* Size: 942,069 training samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 17.38 tokens</li><li>max: 52 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.7 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>0: ~33.40%</li><li>1: ~33.30%</li><li>2: ~33.30%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:--------------------------------------------------------------------|:---------------------------------------------------------------|:---------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is training his horse for a competition.</code> | <code>1</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is at a diner, ordering an omelette.</code> | <code>2</code> |
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> | <code>0</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/losses.html#softmaxloss)
#### sts
* Dataset: [sts](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.0 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.95 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
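The two datasets above are optimized jointly, one loss per dataset. A minimal multi-task sketch with the Sentence Transformers v3 trainer (subset names, splits, and trainer wiring are assumptions based on the tables above, not the author's actual script):

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss, SoftmaxLoss

model = SentenceTransformer("google-bert/bert-base-uncased")

# One dataset per task; the round-robin batch sampler alternates between them.
all_nli = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")
stsb = load_dataset("sentence-transformers/stsb", split="train")

losses = {
    "all-nli": SoftmaxLoss(
        model,
        sentence_embedding_dimension=model.get_sentence_embedding_dimension(),
        num_labels=3,
    ),
    "sts": CosineSimilarityLoss(model),
}

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset={"all-nli": all_nli, "sts": stsb},
    loss=losses,
)
trainer.train()
```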
### Evaluation Datasets
#### all-nli
* Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at [cc6c526](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/cc6c526380e29912b5c6fa03682da4daf773c013)
* Size: 1,000 evaluation samples
* Columns: <code>premise</code>, <code>hypothesis</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | premise | hypothesis | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 18.44 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.57 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>0: ~33.10%</li><li>1: ~33.30%</li><li>2: ~33.60%</li></ul> |
* Samples:
| premise | hypothesis | label |
|:-------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>The sisters are hugging goodbye while holding to go packages after just eating lunch.</code> | <code>1</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> | <code>0</code> |
| <code>Two women are embracing while holding to go packages.</code> | <code>The men are fighting outside a deli.</code> | <code>2</code> |
* Loss: [<code>SoftmaxLoss</code>](https://sbert.net/docs/package_reference/losses.html#softmaxloss)
#### sts
* Dataset: [sts](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 1,500 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 5 tokens</li><li>mean: 15.1 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.11 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.47</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:--------------------------------------------------|:------------------------------------------------------|:------------------|
| <code>A man with a hard hat is dancing.</code> | <code>A man wearing a hard hat is dancing.</code> | <code>1.0</code> |
| <code>A young child is riding a horse.</code> | <code>A child is riding a horse.</code> | <code>0.95</code> |
| <code>A man is feeding a mouse to a snake.</code> | <code>The man is feeding a mouse to the snake.</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: False
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: None
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | sts loss | all-nli loss | sts-dev_spearman_cosine | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:--------:|:------------:|:-----------------------:|:------------------------:|
| 0.1389 | 100 | 0.5961 | 0.0470 | 1.1005 | 0.8096 | - |
| 0.2778 | 200 | 0.5408 | 0.0354 | 0.9687 | 0.8229 | - |
| 0.4167 | 300 | 0.5185 | 0.0373 | 0.9398 | 0.8265 | - |
| 0.5556 | 400 | 0.4978 | 0.0368 | 0.9304 | 0.8200 | - |
| 0.6944 | 500 | 0.5026 | 0.0347 | 0.9044 | 0.8234 | - |
| 0.8333 | 600 | 0.4702 | 0.0326 | 0.8727 | 0.8300 | - |
| 0.9722 | 700 | 0.4649 | 0.0328 | 0.8723 | 0.8351 | - |
| 1.0 | 720 | - | - | - | - | 0.8049 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.017 kWh
- **Carbon Emitted**: 0.006 kg of CO2
- **Hours Used**: 0.097 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.0.0.dev0
- Transformers: 4.41.0.dev0
- PyTorch: 2.3.0+cu121
- Accelerate: 0.26.1
- Datasets: 2.18.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers and SoftmaxLoss
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"language": ["en"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "loss:SoftmaxLoss", "loss:CosineSimilarityLoss"], "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "base_model": "google-bert/bert-base-uncased", "widget": [{"source_sentence": "the guy is dead", "sentences": ["The dog is dead.", "Men are sitting in the park.", "People are outside."]}, {"source_sentence": "Women are running.", "sentences": ["Two women are running.", "A animated airplane is landing.", "The man sang and played his guitar."]}, {"source_sentence": "The gate is yellow.", "sentences": ["The gate is blue.", "The cook is kneading the flour.", "A woman puts flour on a piece of meat."]}, {"source_sentence": "A parrot is talking.", "sentences": ["A man is singing.", "Two men are standing in a room.", "Three dogs playing in the snow."]}, {"source_sentence": "the guy is paid", "sentences": ["A man is receiving a contract.", "A man is racing on his bike.", "a dog chases a cat"]}], "pipeline_tag": "sentence-similarity", "co2_eq_emissions": {"emissions": 6.489379533908795, "energy_consumed": 0.01669499908389665, "source": "codecarbon", "training_type": "fine-tuning", "on_cloud": false, "cpu_model": "13th Gen Intel(R) Core(TM) i7-13700K", "ram_total_size": 31.777088165283203, "hours_used": 0.097, "hardware_used": "1 x NVIDIA GeForce RTX 3090"}, "model-index": [{"name": "SentenceTransformer based on google-bert/bert-base-uncased", "results": [{"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts dev", "type": "sts-dev"}, "metrics": [{"type": "pearson_cosine", "value": 0.8287682657838144, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8350670289838767, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.796834648877542, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.8041000103101458, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.7968015917572032, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.803879972820206, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.7572392072098838, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.7696731029709327, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8287682657838144, "name": "Pearson Max"}, {"type": "spearman_max", "value": 0.8350670289838767, "name": "Spearman Max"}]}, {"task": {"type": "semantic-similarity", "name": "Semantic Similarity"}, "dataset": {"name": "sts test", "type": "sts-test"}, "metrics": [{"type": "pearson_cosine", "value": 0.8014245911006761, "name": "Pearson Cosine"}, {"type": "spearman_cosine", "value": 0.8049359058371248, "name": "Spearman Cosine"}, {"type": "pearson_manhattan", "value": 0.7934883900951029, "name": "Pearson Manhattan"}, {"type": "spearman_manhattan", "value": 0.793480619733962, "name": "Spearman Manhattan"}, {"type": "pearson_euclidean", "value": 0.7940198430253176, "name": "Pearson Euclidean"}, {"type": "spearman_euclidean", "value": 0.7942686805824551, "name": "Spearman Euclidean"}, {"type": "pearson_dot", "value": 0.698878713916111, "name": "Pearson Dot"}, {"type": "spearman_dot", "value": 0.6967434595564439, "name": "Spearman Dot"}, {"type": "pearson_max", "value": 0.8014245911006761, "name": "Pearson Max"}, {"type": 
"spearman_max", "value": 0.8049359058371248, "name": "Spearman Max"}]}]}]} | tomaarsen/bert-base-uncased-multi-task | null | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"loss:SoftmaxLoss",
"loss:CosineSimilarityLoss",
"en",
"arxiv:1908.10084",
"base_model:google-bert/bert-base-uncased",
"model-index",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:30:18+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mani-a-i/llama3_1500_ckpt | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:30:29+00:00 |
text-generation | transformers |
Self-trained microscopic Mistral with around 810M parameters.
The tokenizer is the one from https://huggingface.co/mistralai/Mistral-7B-v0.1.
It is being trained on around 400B tokens; this checkpoint is from step 3k.
Evaluation is currently in progress.
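No usage snippet ships with the card; since this is a standard `transformers` checkpoint, a generic causal-LM sketch should apply (untested assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "DrNicefellow/Microscopic-Mistral-3k-steps"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

out = model.generate(**tok("Once upon a time", return_tensors="pt"), max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```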
## License
This model is available under the Apache 2.0 License.
## Discord Server
Join our Discord server [here](https://discord.gg/xhcBDEM3).
## Feeling Generous? 😊
Eager to buy me a $2 cup of coffee or iced tea? 🍵☕ Sure, here is the link: [https://ko-fi.com/drnicefellow](https://ko-fi.com/drnicefellow). Please add a note saying which one you'd like me to drink!
| {"license": "apache-2.0"} | DrNicefellow/Microscopic-Mistral-3k-steps | null | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T13:30:30+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | tingting/mistral7b_lora_model_balanced_Data_300 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:30:55+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-qwantz-coherent
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6861
- Accuracy: 0.8240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4695 | 1.0 | 339 | 0.4547 | 0.7956 |
| 0.2521 | 2.0 | 678 | 0.4364 | 0.8131 |
| 0.0627 | 3.0 | 1017 | 0.6861 | 0.8240 |
```
Can save 90% of coherent strings by discarding 80% of dp strings (cutoff is 57.403409481048584)
Can save 95% of coherent strings by discarding 63% of dp strings (cutoff is -83.01011323928833)
Can save 98% of coherent strings by discarding 44% of dp strings (cutoff is -97.15004563331604)
Can save 99% of coherent strings by discarding 33% of dp strings (cutoff is -98.31664562225342)
```
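The card does not state which score these cutoffs threshold; magnitudes near ±100 suggest a scaled score rather than a raw probability. A hypothetical sketch of applying such a cutoff (the score definition below is an assumption, not the author's actual function):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("paul-stansifer/bert-qwantz-coherent")
clf = AutoModelForSequenceClassification.from_pretrained("paul-stansifer/bert-qwantz-coherent")

def coherence_score(text: str) -> float:
    # Assumed score: difference of the two class logits (higher = more coherent).
    # The cutoffs quoted above are on whatever scale the author actually used.
    with torch.no_grad():
        logits = clf(**tok(text, return_tensors="pt", truncation=True)).logits[0]
    return float(logits[1] - logits[0])

CUTOFF = -83.0  # illustrative: the "keep 95% of coherent strings" row above
strings = ["my best friend is a utahraptor", "dinosaur the of because happy"]
kept = [s for s in strings if coherence_score(s) > CUTOFF]
```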
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google-bert/bert-base-uncased", "model-index": [{"name": "bert-qwantz-coherent", "results": []}]} | paul-stansifer/bert-qwantz-coherent | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:31:12+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | tingting/mistral7b_lora_model_balanced_Data_400 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:32:33+00:00 |
null | null | {} | castleinthejin/lora-trained-xl | null | [
"region:us"
] | null | 2024-05-02T13:33:39+00:00 |
|
null | null | {} | shreyasgrampurohit/sd-pokemon-model | null | [
"region:us"
] | null | 2024-05-02T13:33:52+00:00 |
|
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
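Since the repository stores a PEFT adapter trained from `mistralai/Mistral-7B-v0.1`, a minimal loading sketch looks like this (the repo id below is assumed from this card's metadata):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Rebuild the base model this adapter was fine-tuned from.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Attach the fine-tuned adapter.
model = PeftModel.from_pretrained(base, "alex17cmbs/outputs")
```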
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.16.0
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "outputs", "results": []}]} | alex17cmbs/outputs | null | [
"peft",
"tensorboard",
"safetensors",
"transformer",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T13:35:33+00:00 |
null | null | {} | alexisxiaoyu/xlm-roberta-base-finetuned-panx-de-fr | null | [
"region:us"
] | null | 2024-05-02T13:36:13+00:00 |
|
text-classification | transformers | {} | magnoliaparks/roberta-base_r | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:36:49+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
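In the absence of author-provided code, a generic starter sketch (the repo id comes from this card's metadata; whether the checkpoint expects a chat template is unknown):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mani-a-i/llama3_prvlaw_1500"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```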
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | mani-a-i/llama3_prvlaw_1500 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T13:37:20+00:00 |
null | null |
# tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf
This is a gguf-format conversion of [Swallow-MS-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-instruct-v0.1), published by tokyotech-llm.
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Other models
mistral
[mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-7b-plus-hf-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-7b-plus-hf-gguf)
[mmnga/tokyotech-llm-Swallow-MS-7b-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MS-7b-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-MX-8x7b-NVE-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-MX-8x7b-NVE-v0.1-gguf)
llama2
[mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-7b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-13b-instruct-v0.1-gguf)
[mmnga/tokyotech-llm-Swallow-70b-instruct-v0.1-gguf](https://huggingface.co/mmnga/tokyotech-llm-Swallow-70b-instruct-v0.1-gguf)
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
# The Japanese prompt asks for a dinner recipe ("今晩の夕食のレシピを教えて" = "tell me a recipe for tonight's dinner").
./main -m 'tokyotech-llm-Swallow-MS-7b-instruct-v0.1-Q4_0.gguf' -n 128 -p '[INST] 今晩の夕食のレシピを教えて [/INST] '
``` | {"language": ["en", "ja"], "license": "apache-2.0", "tags": ["mistral"], "datasets": ["TFMC/imatrix-dataset-for-japanese-llm"]} | mmnga/tokyotech-llm-Swallow-MS-7b-instruct-v0.1-gguf | null | [
"gguf",
"mistral",
"en",
"ja",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T13:37:22+00:00 |
text-classification | transformers |
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.4838660955429077
f1_macro: 0.762273830650919
f1_micro: 0.7968253968253968
f1_weighted: 0.7910936557475937
precision_macro: 0.8108958879749956
precision_micro: 0.7968253968253968
precision_weighted: 0.79479940517321
recall_macro: 0.728675645342312
recall_micro: 0.7968253968253968
recall_weighted: 0.7968253968253968
accuracy: 0.7968253968253968
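A minimal inference sketch (the repo id and the sample prompt are taken from this card's metadata; label names depend on the trained config):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="Zerithas/V16")
print(clf("I love AutoTrain"))
```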
| {"tags": ["autotrain", "text-classification"], "datasets": ["V16/autotrain-data"], "widget": [{"text": "I love AutoTrain"}]} | Zerithas/V16 | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:V16/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:38:13+00:00 |
null | transformers |
# Uploaded model
- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-bnb-4bit"} | tingting/mistral7b_lora_model_balanced_Data_500 | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T13:39:15+00:00 |
feature-extraction | transformers | # fine-tuned/jina-embeddings-v2-base-en-02052024-pmvv-webapp_8647177611
## Model Description
fine-tuned/jina-embeddings-v2-base-en-02052024-pmvv-webapp_8647177611 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found [**here**](https://huggingface.co/datasets/fine-tuned/fine-tuned/jina-embeddings-v2-base-en-02052024-pmvv-webapp_8647177611).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "fine-tuned/jina-embeddings-v2-base-en-02052024-pmvv-webapp_8647177611"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name, trust_remote_code=True)
tokens = tokenizer("Your text here", return_tensors="pt")
# The model returns token-level hidden states; mean-pool them for a sentence embedding
# (padding is ignored here for brevity, which is fine for a single short input).
embedding = model(**tokens).last_hidden_state.mean(dim=1)
```
| {} | fine-tuned/jina-embeddings-v2-base-en-02052024-pmvv-webapp_8647177611 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"custom_code",
"region:us"
] | null | 2024-05-02T13:39:15+00:00 |