modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list of strings) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
baxtos/bartik04-3 | baxtos | 2024-07-02T06:36:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:34:26Z | Entry not found |
YongjieNiu/prior-2Relu-adl-cat-1-500 | YongjieNiu | 2024-07-02T10:32:29Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:SDXL_model",
"license:openrail++",
"region:us"
] | text-to-image | 2024-07-02T06:34:54Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: SDXL_model
instance_prompt: a photo of adl cat
widget:
- text: a photo of adl cat by the sea
output:
url: image_0.png
- text: a photo of adl cat by the sea
output:
url: image_1.png
- text: a photo of adl cat by the sea
output:
url: image_2.png
- text: a photo of adl cat by the sea
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - YongjieNiu/prior-2Relu-adl-cat-1-500
<Gallery />
## Model description
These are YongjieNiu/prior-2Relu-adl-cat-1-500 LoRA adaptation weights for SDXL_model.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: VAE.
## Trigger words
You should use `a photo of adl cat` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/YongjieNiu/prior-2Relu-adl-cat-1-500/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
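Until the authors add their own snippet, here is a minimal sketch of running this LoRA with the `diffusers` library. The base checkpoint is an assumption: the card only names it "SDXL_model", so substitute the actual SDXL base model if it differs.
```python
# Hedged sketch: assumes the LoRA was trained on top of
# stabilityai/stable-diffusion-xl-base-1.0 (the card only says "SDXL_model").
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA adapter weights from this repository.
pipe.load_lora_weights("YongjieNiu/prior-2Relu-adl-cat-1-500")

# Use the trigger phrase from the card to activate the learned concept.
image = pipe("a photo of adl cat by the sea", num_inference_steps=30).images[0]
image.save("adl_cat.png")
```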
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
RichardErkhov/MayaPH_-_GodziLLa2-70B-gguf | RichardErkhov | 2024-07-03T00:41:12Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T06:35:02Z | Entry not found |
LzSavage/LLama3-70B-DPO_hh-rlhf | LzSavage | 2024-07-02T06:35:17Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:35:17Z | Entry not found |
wieheistdu/distilbert-base-uncased-finetuned-squad2-ep4-batch16 | wieheistdu | 2024-07-02T07:51:22Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-07-02T06:35:20Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad2-ep4-batch16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad2-ep4-batch16
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
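For reference, a minimal sketch of how these hyperparameters would map onto `transformers.TrainingArguments`; the output directory is illustrative, and the dataset and preprocessing are not specified in this card.
```python
# Hedged sketch: reproduces only the listed hyperparameters; dataset,
# tokenization, and Trainer setup are assumptions not covered by the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad2-ep4-batch16",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```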
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2658 | 1.0 | 4118 | 1.2433 |
| 1.0043 | 2.0 | 8236 | 1.2286 |
| 0.8315 | 3.0 | 12354 | 1.3488 |
| 0.7225 | 4.0 | 16472 | 1.5125 |
### Framework versions
- Transformers 4.41.2
- Pytorch 1.13.1+cu116
- Datasets 2.19.2
- Tokenizers 0.19.1
|
pandapeng/chinese-llama3-8b-chat | pandapeng | 2024-07-02T06:36:40Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:36:40Z | Entry not found |
PLASIVIA/whisper-small-dv | PLASIVIA | 2024-07-02T06:37:32Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:37:32Z | Entry not found |
webognkbhuvan/phi-2-health | webognkbhuvan | 2024-07-02T06:42:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi-msft",
"text-generation",
"custom_code",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:39:10Z | Entry not found |
baxtos/bartik05-3 | baxtos | 2024-07-02T06:42:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:39:55Z | Entry not found |
Sunsun1010/a | Sunsun1010 | 2024-07-02T06:40:09Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:40:09Z | Entry not found |
QuangHuy46/OCR_HSMT | QuangHuy46 | 2024-07-02T06:40:48Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:40:48Z | Entry not found |
whizzzzkid/whizzzzkid_395_2 | whizzzzkid | 2024-07-02T06:41:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:40:59Z | Entry not found |
houbw/llama38b_ruozhiba_5 | houbw | 2024-07-02T06:42:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T06:42:29Z | ---
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** houbw
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
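A minimal loading sketch is shown below; it assumes the repository hosts a merged, transformers-compatible checkpoint rather than a bare LoRA adapter (the card does not say which).
```python
# Hedged sketch: assumes a full transformers-compatible checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "houbw/llama38b_ruozhiba_5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```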
|
sudhan1998/hu | sudhan1998 | 2024-07-02T06:44:42Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:44:42Z | Entry not found |
bhadauriaupendra062/my-fine-tuned-model-ppo | bhadauriaupendra062 | 2024-07-02T06:45:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T06:44:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
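Until the authors add an official snippet, a minimal sketch is given below; it assumes a standard GPT-2-style causal LM in safetensors format, as the repository tags suggest.
```python
# Hedged sketch: assumes a standard GPT-2-style text-generation checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="bhadauriaupendra062/my-fine-tuned-model-ppo",
)
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```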
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
baxtos/bartik06-3 | baxtos | 2024-07-02T06:47:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:45:27Z | Entry not found |
mayarmostafa/videomae-base-finetuned-bleeding-exp_2 | mayarmostafa | 2024-07-02T08:32:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-07-02T06:49:00Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
model-index:
- name: videomae-base-finetuned-bleeding-exp_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-bleeding-exp_2
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 600
### Framework versions
- Transformers 4.40.2
- Pytorch 1.12.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
habulaj/240380211698 | habulaj | 2024-07-02T06:49:31Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:49:23Z | Entry not found |
upendrawappgo/my-fine-tuned-model-ppo | upendrawappgo | 2024-07-02T06:50:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T06:49:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
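Until the authors add an official snippet, a minimal sketch is given below; it assumes a standard GPT-2-style causal LM in safetensors format, as the repository tags suggest.
```python
# Hedged sketch: assumes a standard GPT-2-style text-generation checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upendrawappgo/my-fine-tuned-model-ppo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```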
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kmpartner/bkcnft-testsr32 | kmpartner | 2024-07-02T06:56:48Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"region:us"
] | null | 2024-07-02T06:50:42Z | Entry not found |
baxtos/bartik07-3 | baxtos | 2024-07-02T06:53:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:50:59Z | Entry not found |
teemperor/starcoder2-15b-Q6_K-GGUF | teemperor | 2024-07-02T06:52:07Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:bigcode/the-stack-v2-train",
"base_model:bigcode/starcoder2-15b",
"license:bigcode-openrail-m",
"model-index",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:51:14Z | ---
base_model: bigcode/starcoder2-15b
datasets:
- bigcode/the-stack-v2-train
library_name: transformers
license: bigcode-openrail-m
pipeline_tag: text-generation
tags:
- code
- llama-cpp
- gguf-my-repo
inference:
parameters:
temperature: 0.2
top_p: 0.95
widget:
- text: 'def print_hello_world():'
example_title: Hello world
group: Python
model-index:
- name: starcoder2-15b
results:
- task:
type: text-generation
dataset:
name: CruxEval-I
type: cruxeval-i
metrics:
- type: pass@1
value: 48.1
- task:
type: text-generation
dataset:
name: DS-1000
type: ds-1000
metrics:
- type: pass@1
value: 33.8
- task:
type: text-generation
dataset:
name: GSM8K (PAL)
type: gsm8k-pal
metrics:
- type: accuracy
value: 65.1
- task:
type: text-generation
dataset:
name: HumanEval+
type: humanevalplus
metrics:
- type: pass@1
value: 37.8
- task:
type: text-generation
dataset:
name: HumanEval
type: humaneval
metrics:
- type: pass@1
value: 46.3
- task:
type: text-generation
dataset:
name: RepoBench-v1.1
type: repobench-v1.1
metrics:
    - type: edit-similarity
value: 74.08
---
# teemperor/starcoder2-15b-Q6_K-GGUF
This model was converted to GGUF format from [`bigcode/starcoder2-15b`](https://huggingface.co/bigcode/starcoder2-15b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/bigcode/starcoder2-15b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo teemperor/starcoder2-15b-Q6_K-GGUF --hf-file starcoder2-15b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo teemperor/starcoder2-15b-Q6_K-GGUF --hf-file starcoder2-15b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo teemperor/starcoder2-15b-Q6_K-GGUF --hf-file starcoder2-15b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo teemperor/starcoder2-15b-Q6_K-GGUF --hf-file starcoder2-15b-q6_k.gguf -c 2048
```
|
RichardErkhov/RajuKandasamy_-_tamillama_tiny_30m-gguf | RichardErkhov | 2024-07-02T06:52:10Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:52:10Z | Entry not found |
hiruymet/Chef | hiruymet | 2024-07-02T06:52:35Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T06:52:35Z | ---
license: mit
---
|
chihlunLee/NoInstruct-small-Embedding-v0-Q4_0-GGUF | chihlunLee | 2024-07-02T06:52:37Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"gguf",
"feature-extraction",
"mteb",
"sentence-similarity",
"transformers",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:avsolatorio/NoInstruct-small-Embedding-v0",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-07-02T06:52:35Z | ---
base_model: avsolatorio/NoInstruct-small-Embedding-v0
language:
- en
library_name: sentence-transformers
license: mit
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- mteb
- sentence-similarity
- sentence-transformers
- transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: NoInstruct-small-Embedding-v0
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.76119402985074
- type: ap
value: 39.03628777559392
- type: f1
value: 69.85860402259618
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.29920000000001
- type: ap
value: 90.03479490717608
- type: f1
value: 93.28554395248467
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.98799999999999
- type: f1
value: 49.46151232451642
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 31.935000000000002
- type: map_at_10
value: 48.791000000000004
- type: map_at_100
value: 49.619
- type: map_at_1000
value: 49.623
- type: map_at_3
value: 44.334
- type: map_at_5
value: 46.908
- type: mrr_at_1
value: 32.93
- type: mrr_at_10
value: 49.158
- type: mrr_at_100
value: 50.00599999999999
- type: mrr_at_1000
value: 50.01
- type: mrr_at_3
value: 44.618
- type: mrr_at_5
value: 47.325
- type: ndcg_at_1
value: 31.935000000000002
- type: ndcg_at_10
value: 57.593
- type: ndcg_at_100
value: 60.841
- type: ndcg_at_1000
value: 60.924
- type: ndcg_at_3
value: 48.416
- type: ndcg_at_5
value: 53.05
- type: precision_at_1
value: 31.935000000000002
- type: precision_at_10
value: 8.549
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.081
- type: precision_at_5
value: 14.296000000000001
- type: recall_at_1
value: 31.935000000000002
- type: recall_at_10
value: 85.491
- type: recall_at_100
value: 99.004
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.242
- type: recall_at_5
value: 71.479
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.78438534940855
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.12916178519471
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.125361608299855
- type: mrr
value: 74.92525172580574
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.64322910336641
- type: cos_sim_spearman
value: 87.20138453306345
- type: euclidean_pearson
value: 87.08547818178234
- type: euclidean_spearman
value: 87.17066094143931
- type: manhattan_pearson
value: 87.30053110771618
- type: manhattan_spearman
value: 86.86824441211934
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.3961038961039
- type: f1
value: 86.3669961645295
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.40291404289857
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 35.102356817746816
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 31.013
- type: map_at_10
value: 42.681999999999995
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.372
- type: map_at_3
value: 39.181
- type: map_at_5
value: 41.071999999999996
- type: mrr_at_1
value: 38.196999999999996
- type: mrr_at_10
value: 48.604
- type: mrr_at_100
value: 49.315
- type: mrr_at_1000
value: 49.363
- type: mrr_at_3
value: 45.756
- type: mrr_at_5
value: 47.43
- type: ndcg_at_1
value: 38.196999999999996
- type: ndcg_at_10
value: 49.344
- type: ndcg_at_100
value: 54.662
- type: ndcg_at_1000
value: 56.665
- type: ndcg_at_3
value: 44.146
- type: ndcg_at_5
value: 46.514
- type: precision_at_1
value: 38.196999999999996
- type: precision_at_10
value: 9.571
- type: precision_at_100
value: 1.542
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.364
- type: precision_at_5
value: 15.336
- type: recall_at_1
value: 31.013
- type: recall_at_10
value: 61.934999999999995
- type: recall_at_100
value: 83.923
- type: recall_at_1000
value: 96.601
- type: recall_at_3
value: 46.86
- type: recall_at_5
value: 53.620000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 29.84
- type: map_at_10
value: 39.335
- type: map_at_100
value: 40.647
- type: map_at_1000
value: 40.778
- type: map_at_3
value: 36.556
- type: map_at_5
value: 38.048
- type: mrr_at_1
value: 36.815
- type: mrr_at_10
value: 45.175
- type: mrr_at_100
value: 45.907
- type: mrr_at_1000
value: 45.946999999999996
- type: mrr_at_3
value: 42.909000000000006
- type: mrr_at_5
value: 44.227
- type: ndcg_at_1
value: 36.815
- type: ndcg_at_10
value: 44.783
- type: ndcg_at_100
value: 49.551
- type: ndcg_at_1000
value: 51.612
- type: ndcg_at_3
value: 40.697
- type: ndcg_at_5
value: 42.558
- type: precision_at_1
value: 36.815
- type: precision_at_10
value: 8.363
- type: precision_at_100
value: 1.385
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 19.342000000000002
- type: precision_at_5
value: 13.706999999999999
- type: recall_at_1
value: 29.84
- type: recall_at_10
value: 54.164
- type: recall_at_100
value: 74.36
- type: recall_at_1000
value: 87.484
- type: recall_at_3
value: 42.306
- type: recall_at_5
value: 47.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.231
- type: map_at_10
value: 51.44800000000001
- type: map_at_100
value: 52.574
- type: map_at_1000
value: 52.629999999999995
- type: map_at_3
value: 48.077
- type: map_at_5
value: 50.019000000000005
- type: mrr_at_1
value: 44.89
- type: mrr_at_10
value: 54.803000000000004
- type: mrr_at_100
value: 55.556000000000004
- type: mrr_at_1000
value: 55.584
- type: mrr_at_3
value: 52.32
- type: mrr_at_5
value: 53.846000000000004
- type: ndcg_at_1
value: 44.89
- type: ndcg_at_10
value: 57.228
- type: ndcg_at_100
value: 61.57
- type: ndcg_at_1000
value: 62.613
- type: ndcg_at_3
value: 51.727000000000004
- type: ndcg_at_5
value: 54.496
- type: precision_at_1
value: 44.89
- type: precision_at_10
value: 9.266
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 23.051
- type: precision_at_5
value: 15.987000000000002
- type: recall_at_1
value: 39.231
- type: recall_at_10
value: 70.82000000000001
- type: recall_at_100
value: 89.446
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 56.40500000000001
- type: recall_at_5
value: 62.993
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 25.296000000000003
- type: map_at_10
value: 34.021
- type: map_at_100
value: 35.158
- type: map_at_1000
value: 35.233
- type: map_at_3
value: 31.424999999999997
- type: map_at_5
value: 33.046
- type: mrr_at_1
value: 27.232
- type: mrr_at_10
value: 36.103
- type: mrr_at_100
value: 37.076
- type: mrr_at_1000
value: 37.135
- type: mrr_at_3
value: 33.635
- type: mrr_at_5
value: 35.211
- type: ndcg_at_1
value: 27.232
- type: ndcg_at_10
value: 38.878
- type: ndcg_at_100
value: 44.284
- type: ndcg_at_1000
value: 46.268
- type: ndcg_at_3
value: 33.94
- type: ndcg_at_5
value: 36.687
- type: precision_at_1
value: 27.232
- type: precision_at_10
value: 5.921
- type: precision_at_100
value: 0.907
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 14.426
- type: precision_at_5
value: 10.215
- type: recall_at_1
value: 25.296000000000003
- type: recall_at_10
value: 51.708
- type: recall_at_100
value: 76.36699999999999
- type: recall_at_1000
value: 91.306
- type: recall_at_3
value: 38.651
- type: recall_at_5
value: 45.201
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 16.24
- type: map_at_10
value: 24.696
- type: map_at_100
value: 25.945
- type: map_at_1000
value: 26.069
- type: map_at_3
value: 22.542
- type: map_at_5
value: 23.526
- type: mrr_at_1
value: 20.149
- type: mrr_at_10
value: 29.584
- type: mrr_at_100
value: 30.548
- type: mrr_at_1000
value: 30.618000000000002
- type: mrr_at_3
value: 27.301
- type: mrr_at_5
value: 28.563
- type: ndcg_at_1
value: 20.149
- type: ndcg_at_10
value: 30.029
- type: ndcg_at_100
value: 35.812
- type: ndcg_at_1000
value: 38.755
- type: ndcg_at_3
value: 26.008
- type: ndcg_at_5
value: 27.517000000000003
- type: precision_at_1
value: 20.149
- type: precision_at_10
value: 5.647
- type: precision_at_100
value: 0.968
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 12.934999999999999
- type: precision_at_5
value: 8.955
- type: recall_at_1
value: 16.24
- type: recall_at_10
value: 41.464
- type: recall_at_100
value: 66.781
- type: recall_at_1000
value: 87.85300000000001
- type: recall_at_3
value: 29.822
- type: recall_at_5
value: 34.096
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 29.044999999999998
- type: map_at_10
value: 39.568999999999996
- type: map_at_100
value: 40.831
- type: map_at_1000
value: 40.948
- type: map_at_3
value: 36.495
- type: map_at_5
value: 38.21
- type: mrr_at_1
value: 35.611
- type: mrr_at_10
value: 45.175
- type: mrr_at_100
value: 45.974
- type: mrr_at_1000
value: 46.025
- type: mrr_at_3
value: 42.765
- type: mrr_at_5
value: 44.151
- type: ndcg_at_1
value: 35.611
- type: ndcg_at_10
value: 45.556999999999995
- type: ndcg_at_100
value: 50.86000000000001
- type: ndcg_at_1000
value: 52.983000000000004
- type: ndcg_at_3
value: 40.881
- type: ndcg_at_5
value: 43.035000000000004
- type: precision_at_1
value: 35.611
- type: precision_at_10
value: 8.306
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.165
- type: precision_at_3
value: 19.57
- type: precision_at_5
value: 13.725000000000001
- type: recall_at_1
value: 29.044999999999998
- type: recall_at_10
value: 57.513999999999996
- type: recall_at_100
value: 80.152
- type: recall_at_1000
value: 93.982
- type: recall_at_3
value: 44.121
- type: recall_at_5
value: 50.007000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 22.349
- type: map_at_10
value: 33.434000000000005
- type: map_at_100
value: 34.8
- type: map_at_1000
value: 34.919
- type: map_at_3
value: 30.348000000000003
- type: map_at_5
value: 31.917
- type: mrr_at_1
value: 28.195999999999998
- type: mrr_at_10
value: 38.557
- type: mrr_at_100
value: 39.550999999999995
- type: mrr_at_1000
value: 39.607
- type: mrr_at_3
value: 36.035000000000004
- type: mrr_at_5
value: 37.364999999999995
- type: ndcg_at_1
value: 28.195999999999998
- type: ndcg_at_10
value: 39.656000000000006
- type: ndcg_at_100
value: 45.507999999999996
- type: ndcg_at_1000
value: 47.848
- type: ndcg_at_3
value: 34.609
- type: ndcg_at_5
value: 36.65
- type: precision_at_1
value: 28.195999999999998
- type: precision_at_10
value: 7.534000000000001
- type: precision_at_100
value: 1.217
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 17.085
- type: precision_at_5
value: 12.169
- type: recall_at_1
value: 22.349
- type: recall_at_10
value: 53.127
- type: recall_at_100
value: 77.884
- type: recall_at_1000
value: 93.705
- type: recall_at_3
value: 38.611000000000004
- type: recall_at_5
value: 44.182
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 25.215749999999996
- type: map_at_10
value: 34.332750000000004
- type: map_at_100
value: 35.58683333333333
- type: map_at_1000
value: 35.70458333333333
- type: map_at_3
value: 31.55441666666667
- type: map_at_5
value: 33.100833333333334
- type: mrr_at_1
value: 29.697250000000004
- type: mrr_at_10
value: 38.372249999999994
- type: mrr_at_100
value: 39.26708333333334
- type: mrr_at_1000
value: 39.3265
- type: mrr_at_3
value: 35.946083333333334
- type: mrr_at_5
value: 37.336999999999996
- type: ndcg_at_1
value: 29.697250000000004
- type: ndcg_at_10
value: 39.64575
- type: ndcg_at_100
value: 44.996833333333335
- type: ndcg_at_1000
value: 47.314499999999995
- type: ndcg_at_3
value: 34.93383333333334
- type: ndcg_at_5
value: 37.15291666666667
- type: precision_at_1
value: 29.697250000000004
- type: precision_at_10
value: 6.98825
- type: precision_at_100
value: 1.138
- type: precision_at_1000
value: 0.15283333333333332
- type: precision_at_3
value: 16.115583333333333
- type: precision_at_5
value: 11.460916666666666
- type: recall_at_1
value: 25.215749999999996
- type: recall_at_10
value: 51.261250000000004
- type: recall_at_100
value: 74.67258333333334
- type: recall_at_1000
value: 90.72033333333334
- type: recall_at_3
value: 38.1795
- type: recall_at_5
value: 43.90658333333334
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.352
- type: map_at_10
value: 30.576999999999998
- type: map_at_100
value: 31.545
- type: map_at_1000
value: 31.642
- type: map_at_3
value: 28.605000000000004
- type: map_at_5
value: 29.828
- type: mrr_at_1
value: 26.994
- type: mrr_at_10
value: 33.151
- type: mrr_at_100
value: 33.973
- type: mrr_at_1000
value: 34.044999999999995
- type: mrr_at_3
value: 31.135
- type: mrr_at_5
value: 32.262
- type: ndcg_at_1
value: 26.994
- type: ndcg_at_10
value: 34.307
- type: ndcg_at_100
value: 39.079
- type: ndcg_at_1000
value: 41.548
- type: ndcg_at_3
value: 30.581000000000003
- type: ndcg_at_5
value: 32.541
- type: precision_at_1
value: 26.994
- type: precision_at_10
value: 5.244999999999999
- type: precision_at_100
value: 0.831
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 12.781
- type: precision_at_5
value: 9.017999999999999
- type: recall_at_1
value: 24.352
- type: recall_at_10
value: 43.126999999999995
- type: recall_at_100
value: 64.845
- type: recall_at_1000
value: 83.244
- type: recall_at_3
value: 33.308
- type: recall_at_5
value: 37.984
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 16.592000000000002
- type: map_at_10
value: 23.29
- type: map_at_100
value: 24.423000000000002
- type: map_at_1000
value: 24.554000000000002
- type: map_at_3
value: 20.958
- type: map_at_5
value: 22.267
- type: mrr_at_1
value: 20.061999999999998
- type: mrr_at_10
value: 26.973999999999997
- type: mrr_at_100
value: 27.944999999999997
- type: mrr_at_1000
value: 28.023999999999997
- type: mrr_at_3
value: 24.839
- type: mrr_at_5
value: 26.033
- type: ndcg_at_1
value: 20.061999999999998
- type: ndcg_at_10
value: 27.682000000000002
- type: ndcg_at_100
value: 33.196
- type: ndcg_at_1000
value: 36.246
- type: ndcg_at_3
value: 23.559
- type: ndcg_at_5
value: 25.507
- type: precision_at_1
value: 20.061999999999998
- type: precision_at_10
value: 5.086
- type: precision_at_100
value: 0.9249999999999999
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 11.046
- type: precision_at_5
value: 8.149000000000001
- type: recall_at_1
value: 16.592000000000002
- type: recall_at_10
value: 37.181999999999995
- type: recall_at_100
value: 62.224999999999994
- type: recall_at_1000
value: 84.072
- type: recall_at_3
value: 25.776
- type: recall_at_5
value: 30.680000000000003
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 26.035999999999998
- type: map_at_10
value: 34.447
- type: map_at_100
value: 35.697
- type: map_at_1000
value: 35.802
- type: map_at_3
value: 31.64
- type: map_at_5
value: 33.056999999999995
- type: mrr_at_1
value: 29.851
- type: mrr_at_10
value: 38.143
- type: mrr_at_100
value: 39.113
- type: mrr_at_1000
value: 39.175
- type: mrr_at_3
value: 35.665
- type: mrr_at_5
value: 36.901
- type: ndcg_at_1
value: 29.851
- type: ndcg_at_10
value: 39.554
- type: ndcg_at_100
value: 45.091
- type: ndcg_at_1000
value: 47.504000000000005
- type: ndcg_at_3
value: 34.414
- type: ndcg_at_5
value: 36.508
- type: precision_at_1
value: 29.851
- type: precision_at_10
value: 6.614000000000001
- type: precision_at_100
value: 1.051
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 15.329999999999998
- type: precision_at_5
value: 10.671999999999999
- type: recall_at_1
value: 26.035999999999998
- type: recall_at_10
value: 51.396
- type: recall_at_100
value: 75.09
- type: recall_at_1000
value: 91.904
- type: recall_at_3
value: 37.378
- type: recall_at_5
value: 42.69
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 23.211000000000002
- type: map_at_10
value: 32.231
- type: map_at_100
value: 33.772999999999996
- type: map_at_1000
value: 33.982
- type: map_at_3
value: 29.128
- type: map_at_5
value: 31.002999999999997
- type: mrr_at_1
value: 27.668
- type: mrr_at_10
value: 36.388
- type: mrr_at_100
value: 37.384
- type: mrr_at_1000
value: 37.44
- type: mrr_at_3
value: 33.762
- type: mrr_at_5
value: 35.234
- type: ndcg_at_1
value: 27.668
- type: ndcg_at_10
value: 38.043
- type: ndcg_at_100
value: 44.21
- type: ndcg_at_1000
value: 46.748
- type: ndcg_at_3
value: 32.981
- type: ndcg_at_5
value: 35.58
- type: precision_at_1
value: 27.668
- type: precision_at_10
value: 7.352
- type: precision_at_100
value: 1.5
- type: precision_at_1000
value: 0.23700000000000002
- type: precision_at_3
value: 15.613
- type: precision_at_5
value: 11.501999999999999
- type: recall_at_1
value: 23.211000000000002
- type: recall_at_10
value: 49.851
- type: recall_at_100
value: 77.596
- type: recall_at_1000
value: 93.683
- type: recall_at_3
value: 35.403
- type: recall_at_5
value: 42.485
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 19.384
- type: map_at_10
value: 26.262999999999998
- type: map_at_100
value: 27.409
- type: map_at_1000
value: 27.526
- type: map_at_3
value: 23.698
- type: map_at_5
value: 25.217
- type: mrr_at_1
value: 20.702
- type: mrr_at_10
value: 27.810000000000002
- type: mrr_at_100
value: 28.863
- type: mrr_at_1000
value: 28.955
- type: mrr_at_3
value: 25.230999999999998
- type: mrr_at_5
value: 26.821
- type: ndcg_at_1
value: 20.702
- type: ndcg_at_10
value: 30.688
- type: ndcg_at_100
value: 36.138999999999996
- type: ndcg_at_1000
value: 38.984
- type: ndcg_at_3
value: 25.663000000000004
- type: ndcg_at_5
value: 28.242
- type: precision_at_1
value: 20.702
- type: precision_at_10
value: 4.954
- type: precision_at_100
value: 0.823
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 10.844
- type: precision_at_5
value: 8.096
- type: recall_at_1
value: 19.384
- type: recall_at_10
value: 42.847
- type: recall_at_100
value: 67.402
- type: recall_at_1000
value: 88.145
- type: recall_at_3
value: 29.513
- type: recall_at_5
value: 35.57
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 14.915000000000001
- type: map_at_10
value: 25.846999999999998
- type: map_at_100
value: 27.741
- type: map_at_1000
value: 27.921000000000003
- type: map_at_3
value: 21.718
- type: map_at_5
value: 23.948
- type: mrr_at_1
value: 33.941
- type: mrr_at_10
value: 46.897
- type: mrr_at_100
value: 47.63
- type: mrr_at_1000
value: 47.658
- type: mrr_at_3
value: 43.919999999999995
- type: mrr_at_5
value: 45.783
- type: ndcg_at_1
value: 33.941
- type: ndcg_at_10
value: 35.202
- type: ndcg_at_100
value: 42.132
- type: ndcg_at_1000
value: 45.190999999999995
- type: ndcg_at_3
value: 29.68
- type: ndcg_at_5
value: 31.631999999999998
- type: precision_at_1
value: 33.941
- type: precision_at_10
value: 10.906
- type: precision_at_100
value: 1.8339999999999999
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 22.606
- type: precision_at_5
value: 17.081
- type: recall_at_1
value: 14.915000000000001
- type: recall_at_10
value: 40.737
- type: recall_at_100
value: 64.42
- type: recall_at_1000
value: 81.435
- type: recall_at_3
value: 26.767000000000003
- type: recall_at_5
value: 32.895
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 8.665000000000001
- type: map_at_10
value: 19.087
- type: map_at_100
value: 26.555
- type: map_at_1000
value: 28.105999999999998
- type: map_at_3
value: 13.858999999999998
- type: map_at_5
value: 16.083
- type: mrr_at_1
value: 68.5
- type: mrr_at_10
value: 76.725
- type: mrr_at_100
value: 76.974
- type: mrr_at_1000
value: 76.981
- type: mrr_at_3
value: 75.583
- type: mrr_at_5
value: 76.208
- type: ndcg_at_1
value: 55.875
- type: ndcg_at_10
value: 41.018
- type: ndcg_at_100
value: 44.982
- type: ndcg_at_1000
value: 52.43
- type: ndcg_at_3
value: 46.534
- type: ndcg_at_5
value: 43.083
- type: precision_at_1
value: 68.5
- type: precision_at_10
value: 32.35
- type: precision_at_100
value: 10.078
- type: precision_at_1000
value: 1.957
- type: precision_at_3
value: 50.083
- type: precision_at_5
value: 41.3
- type: recall_at_1
value: 8.665000000000001
- type: recall_at_10
value: 24.596999999999998
- type: recall_at_100
value: 50.612
- type: recall_at_1000
value: 74.24
- type: recall_at_3
value: 15.337
- type: recall_at_5
value: 18.796
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 55.06500000000001
- type: f1
value: 49.827367590822035
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 76.059
- type: map_at_10
value: 83.625
- type: map_at_100
value: 83.845
- type: map_at_1000
value: 83.858
- type: map_at_3
value: 82.67099999999999
- type: map_at_5
value: 83.223
- type: mrr_at_1
value: 82.013
- type: mrr_at_10
value: 88.44800000000001
- type: mrr_at_100
value: 88.535
- type: mrr_at_1000
value: 88.537
- type: mrr_at_3
value: 87.854
- type: mrr_at_5
value: 88.221
- type: ndcg_at_1
value: 82.013
- type: ndcg_at_10
value: 87.128
- type: ndcg_at_100
value: 87.922
- type: ndcg_at_1000
value: 88.166
- type: ndcg_at_3
value: 85.648
- type: ndcg_at_5
value: 86.366
- type: precision_at_1
value: 82.013
- type: precision_at_10
value: 10.32
- type: precision_at_100
value: 1.093
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 32.408
- type: precision_at_5
value: 19.973
- type: recall_at_1
value: 76.059
- type: recall_at_10
value: 93.229
- type: recall_at_100
value: 96.387
- type: recall_at_1000
value: 97.916
- type: recall_at_3
value: 89.025
- type: recall_at_5
value: 90.96300000000001
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 20.479
- type: map_at_10
value: 33.109
- type: map_at_100
value: 34.803
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 28.967
- type: map_at_5
value: 31.385
- type: mrr_at_1
value: 40.278000000000006
- type: mrr_at_10
value: 48.929
- type: mrr_at_100
value: 49.655
- type: mrr_at_1000
value: 49.691
- type: mrr_at_3
value: 46.605000000000004
- type: mrr_at_5
value: 48.056
- type: ndcg_at_1
value: 40.278000000000006
- type: ndcg_at_10
value: 40.649
- type: ndcg_at_100
value: 47.027
- type: ndcg_at_1000
value: 50.249
- type: ndcg_at_3
value: 37.364000000000004
- type: ndcg_at_5
value: 38.494
- type: precision_at_1
value: 40.278000000000006
- type: precision_at_10
value: 11.327
- type: precision_at_100
value: 1.802
- type: precision_at_1000
value: 0.23700000000000002
- type: precision_at_3
value: 25.102999999999998
- type: precision_at_5
value: 18.457
- type: recall_at_1
value: 20.479
- type: recall_at_10
value: 46.594
- type: recall_at_100
value: 71.101
- type: recall_at_1000
value: 90.31099999999999
- type: recall_at_3
value: 33.378
- type: recall_at_5
value: 39.587
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 36.59
- type: map_at_10
value: 58.178
- type: map_at_100
value: 59.095
- type: map_at_1000
value: 59.16400000000001
- type: map_at_3
value: 54.907
- type: map_at_5
value: 56.89999999999999
- type: mrr_at_1
value: 73.18
- type: mrr_at_10
value: 79.935
- type: mrr_at_100
value: 80.16799999999999
- type: mrr_at_1000
value: 80.17800000000001
- type: mrr_at_3
value: 78.776
- type: mrr_at_5
value: 79.522
- type: ndcg_at_1
value: 73.18
- type: ndcg_at_10
value: 66.538
- type: ndcg_at_100
value: 69.78
- type: ndcg_at_1000
value: 71.102
- type: ndcg_at_3
value: 61.739
- type: ndcg_at_5
value: 64.35600000000001
- type: precision_at_1
value: 73.18
- type: precision_at_10
value: 14.035
- type: precision_at_100
value: 1.657
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 39.684999999999995
- type: precision_at_5
value: 25.885
- type: recall_at_1
value: 36.59
- type: recall_at_10
value: 70.176
- type: recall_at_100
value: 82.836
- type: recall_at_1000
value: 91.526
- type: recall_at_3
value: 59.526999999999994
- type: recall_at_5
value: 64.713
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.1472
- type: ap
value: 85.73994227076815
- type: f1
value: 90.1271700788608
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 21.689
- type: map_at_10
value: 33.518
- type: map_at_100
value: 34.715
- type: map_at_1000
value: 34.766000000000005
- type: map_at_3
value: 29.781000000000002
- type: map_at_5
value: 31.838
- type: mrr_at_1
value: 22.249
- type: mrr_at_10
value: 34.085
- type: mrr_at_100
value: 35.223
- type: mrr_at_1000
value: 35.266999999999996
- type: mrr_at_3
value: 30.398999999999997
- type: mrr_at_5
value: 32.437
- type: ndcg_at_1
value: 22.249
- type: ndcg_at_10
value: 40.227000000000004
- type: ndcg_at_100
value: 45.961999999999996
- type: ndcg_at_1000
value: 47.248000000000005
- type: ndcg_at_3
value: 32.566
- type: ndcg_at_5
value: 36.229
- type: precision_at_1
value: 22.249
- type: precision_at_10
value: 6.358
- type: precision_at_100
value: 0.923
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 13.83
- type: precision_at_5
value: 10.145999999999999
- type: recall_at_1
value: 21.689
- type: recall_at_10
value: 60.92999999999999
- type: recall_at_100
value: 87.40599999999999
- type: recall_at_1000
value: 97.283
- type: recall_at_3
value: 40.01
- type: recall_at_5
value: 48.776
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.28727770177838
- type: f1
value: 95.02577308660041
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.5736434108527
- type: f1
value: 61.2451202054398
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.01210490921318
- type: f1
value: 73.70188053982473
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.33422999327504
- type: f1
value: 79.48369022509658
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.70891567267726
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 32.15203494451706
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.919517862194173
- type: mrr
value: 33.15466289140483
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 5.992
- type: map_at_10
value: 13.197000000000001
- type: map_at_100
value: 16.907
- type: map_at_1000
value: 18.44
- type: map_at_3
value: 9.631
- type: map_at_5
value: 11.243
- type: mrr_at_1
value: 44.272
- type: mrr_at_10
value: 53.321
- type: mrr_at_100
value: 53.903
- type: mrr_at_1000
value: 53.952999999999996
- type: mrr_at_3
value: 51.393
- type: mrr_at_5
value: 52.708999999999996
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 34.921
- type: ndcg_at_100
value: 32.384
- type: ndcg_at_1000
value: 41.260000000000005
- type: ndcg_at_3
value: 40.186
- type: ndcg_at_5
value: 37.89
- type: precision_at_1
value: 44.272
- type: precision_at_10
value: 26.006
- type: precision_at_100
value: 8.44
- type: precision_at_1000
value: 2.136
- type: precision_at_3
value: 37.977
- type: precision_at_5
value: 32.755
- type: recall_at_1
value: 5.992
- type: recall_at_10
value: 17.01
- type: recall_at_100
value: 33.080999999999996
- type: recall_at_1000
value: 65.054
- type: recall_at_3
value: 10.528
- type: recall_at_5
value: 13.233
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 28.871999999999996
- type: map_at_10
value: 43.286
- type: map_at_100
value: 44.432
- type: map_at_1000
value: 44.464999999999996
- type: map_at_3
value: 38.856
- type: map_at_5
value: 41.514
- type: mrr_at_1
value: 32.619
- type: mrr_at_10
value: 45.75
- type: mrr_at_100
value: 46.622
- type: mrr_at_1000
value: 46.646
- type: mrr_at_3
value: 41.985
- type: mrr_at_5
value: 44.277
- type: ndcg_at_1
value: 32.59
- type: ndcg_at_10
value: 50.895999999999994
- type: ndcg_at_100
value: 55.711999999999996
- type: ndcg_at_1000
value: 56.48800000000001
- type: ndcg_at_3
value: 42.504999999999995
- type: ndcg_at_5
value: 46.969
- type: precision_at_1
value: 32.59
- type: precision_at_10
value: 8.543000000000001
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 19.448
- type: precision_at_5
value: 14.218
- type: recall_at_1
value: 28.871999999999996
- type: recall_at_10
value: 71.748
- type: recall_at_100
value: 92.55499999999999
- type: recall_at_1000
value: 98.327
- type: recall_at_3
value: 49.944
- type: recall_at_5
value: 60.291
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 70.664
- type: map_at_10
value: 84.681
- type: map_at_100
value: 85.289
- type: map_at_1000
value: 85.306
- type: map_at_3
value: 81.719
- type: map_at_5
value: 83.601
- type: mrr_at_1
value: 81.35
- type: mrr_at_10
value: 87.591
- type: mrr_at_100
value: 87.691
- type: mrr_at_1000
value: 87.693
- type: mrr_at_3
value: 86.675
- type: mrr_at_5
value: 87.29299999999999
- type: ndcg_at_1
value: 81.33
- type: ndcg_at_10
value: 88.411
- type: ndcg_at_100
value: 89.579
- type: ndcg_at_1000
value: 89.687
- type: ndcg_at_3
value: 85.613
- type: ndcg_at_5
value: 87.17
- type: precision_at_1
value: 81.33
- type: precision_at_10
value: 13.422
- type: precision_at_100
value: 1.5270000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.463
- type: precision_at_5
value: 24.646
- type: recall_at_1
value: 70.664
- type: recall_at_10
value: 95.54
- type: recall_at_100
value: 99.496
- type: recall_at_1000
value: 99.978
- type: recall_at_3
value: 87.481
- type: recall_at_5
value: 91.88499999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.40341814991112
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 61.231318481346655
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 4.833
- type: map_at_10
value: 13.149
- type: map_at_100
value: 15.578
- type: map_at_1000
value: 15.963
- type: map_at_3
value: 9.269
- type: map_at_5
value: 11.182
- type: mrr_at_1
value: 23.9
- type: mrr_at_10
value: 35.978
- type: mrr_at_100
value: 37.076
- type: mrr_at_1000
value: 37.126
- type: mrr_at_3
value: 32.333
- type: mrr_at_5
value: 34.413
- type: ndcg_at_1
value: 23.9
- type: ndcg_at_10
value: 21.823
- type: ndcg_at_100
value: 30.833
- type: ndcg_at_1000
value: 36.991
- type: ndcg_at_3
value: 20.465
- type: ndcg_at_5
value: 17.965999999999998
- type: precision_at_1
value: 23.9
- type: precision_at_10
value: 11.49
- type: precision_at_100
value: 2.444
- type: precision_at_1000
value: 0.392
- type: precision_at_3
value: 19.3
- type: precision_at_5
value: 15.959999999999999
- type: recall_at_1
value: 4.833
- type: recall_at_10
value: 23.294999999999998
- type: recall_at_100
value: 49.63
- type: recall_at_1000
value: 79.49199999999999
- type: recall_at_3
value: 11.732
- type: recall_at_5
value: 16.167
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 85.62938108735759
- type: cos_sim_spearman
value: 80.30777094408789
- type: euclidean_pearson
value: 82.94516686659536
- type: euclidean_spearman
value: 80.34489663248169
- type: manhattan_pearson
value: 82.85830094736245
- type: manhattan_spearman
value: 80.24902623215449
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 85.23777464247604
- type: cos_sim_spearman
value: 75.75714864112797
- type: euclidean_pearson
value: 82.33806918604493
- type: euclidean_spearman
value: 75.45282124387357
- type: manhattan_pearson
value: 82.32555620660538
- type: manhattan_spearman
value: 75.49228731684082
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.88151620954451
- type: cos_sim_spearman
value: 86.08377598473446
- type: euclidean_pearson
value: 85.36958329369413
- type: euclidean_spearman
value: 86.10274219670679
- type: manhattan_pearson
value: 85.25873897594711
- type: manhattan_spearman
value: 85.98096461661584
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.29360558735978
- type: cos_sim_spearman
value: 82.28284203795577
- type: euclidean_pearson
value: 83.81636655536633
- type: euclidean_spearman
value: 82.24340438530236
- type: manhattan_pearson
value: 83.83914453428608
- type: manhattan_spearman
value: 82.28391354080694
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.47344180426744
- type: cos_sim_spearman
value: 88.90045649789438
- type: euclidean_pearson
value: 88.43020815961273
- type: euclidean_spearman
value: 89.0087449011776
- type: manhattan_pearson
value: 88.37601826505525
- type: manhattan_spearman
value: 88.96756360690617
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.35997025304613
- type: cos_sim_spearman
value: 85.18237675717147
- type: euclidean_pearson
value: 84.46478196990202
- type: euclidean_spearman
value: 85.27748677712205
- type: manhattan_pearson
value: 84.29342543953123
- type: manhattan_spearman
value: 85.10579612516567
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.56668329596836
- type: cos_sim_spearman
value: 88.72837234129177
- type: euclidean_pearson
value: 89.39395650897828
- type: euclidean_spearman
value: 88.82001247906778
- type: manhattan_pearson
value: 89.41735354368878
- type: manhattan_spearman
value: 88.95159141850039
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.466167902991
- type: cos_sim_spearman
value: 68.54466147197274
- type: euclidean_pearson
value: 69.35551179564695
- type: euclidean_spearman
value: 68.75455717749132
- type: manhattan_pearson
value: 69.42432368208264
- type: manhattan_spearman
value: 68.83203709670562
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.33241300373689
- type: cos_sim_spearman
value: 86.97909372129874
- type: euclidean_pearson
value: 86.99526113559924
- type: euclidean_spearman
value: 87.02644372623219
- type: manhattan_pearson
value: 86.78744182759846
- type: manhattan_spearman
value: 86.8886180198196
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.18374413668717
- type: mrr
value: 95.93213068703264
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 58.31699999999999
- type: map_at_10
value: 67.691
- type: map_at_100
value: 68.201
- type: map_at_1000
value: 68.232
- type: map_at_3
value: 64.47800000000001
- type: map_at_5
value: 66.51
- type: mrr_at_1
value: 61.0
- type: mrr_at_10
value: 68.621
- type: mrr_at_100
value: 68.973
- type: mrr_at_1000
value: 69.002
- type: mrr_at_3
value: 66.111
- type: mrr_at_5
value: 67.578
- type: ndcg_at_1
value: 61.0
- type: ndcg_at_10
value: 72.219
- type: ndcg_at_100
value: 74.397
- type: ndcg_at_1000
value: 75.021
- type: ndcg_at_3
value: 66.747
- type: ndcg_at_5
value: 69.609
- type: precision_at_1
value: 61.0
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.08
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 17.267
- type: recall_at_1
value: 58.31699999999999
- type: recall_at_10
value: 85.233
- type: recall_at_100
value: 95.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.589
- type: recall_at_5
value: 77.628
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.83267326732673
- type: cos_sim_ap
value: 96.13707107038228
- type: cos_sim_f1
value: 91.48830263812842
- type: cos_sim_precision
value: 91.0802775024777
- type: cos_sim_recall
value: 91.9
- type: dot_accuracy
value: 99.83069306930693
- type: dot_ap
value: 96.21199069147254
- type: dot_f1
value: 91.36295556665004
- type: dot_precision
value: 91.22632103688933
- type: dot_recall
value: 91.5
- type: euclidean_accuracy
value: 99.83267326732673
- type: euclidean_ap
value: 96.08957801367436
- type: euclidean_f1
value: 91.33004926108374
- type: euclidean_precision
value: 90.0
- type: euclidean_recall
value: 92.7
- type: manhattan_accuracy
value: 99.83564356435643
- type: manhattan_ap
value: 96.10534946461945
- type: manhattan_f1
value: 91.74950298210736
- type: manhattan_precision
value: 91.20553359683794
- type: manhattan_recall
value: 92.30000000000001
- type: max_accuracy
value: 99.83564356435643
- type: max_ap
value: 96.21199069147254
- type: max_f1
value: 91.74950298210736
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 62.045718843534736
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 36.6501777041092
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.963913408053955
- type: mrr
value: 53.87972423818012
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.44195730764998
- type: cos_sim_spearman
value: 30.59626288679397
- type: dot_pearson
value: 30.22974492404086
- type: dot_spearman
value: 29.345245972906497
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.24
- type: map_at_10
value: 2.01
- type: map_at_100
value: 11.928999999999998
- type: map_at_1000
value: 29.034
- type: map_at_3
value: 0.679
- type: map_at_5
value: 1.064
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 87.0
- type: ndcg_at_10
value: 80.118
- type: ndcg_at_100
value: 60.753
- type: ndcg_at_1000
value: 54.632999999999996
- type: ndcg_at_3
value: 83.073
- type: ndcg_at_5
value: 80.733
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 62.019999999999996
- type: precision_at_1000
value: 24.028
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 85.2
- type: recall_at_1
value: 0.24
- type: recall_at_10
value: 2.205
- type: recall_at_100
value: 15.068000000000001
- type: recall_at_1000
value: 51.796
- type: recall_at_3
value: 0.698
- type: recall_at_5
value: 1.1199999999999999
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.066
- type: map_at_10
value: 9.219
- type: map_at_100
value: 15.387
- type: map_at_1000
value: 16.957
- type: map_at_3
value: 5.146
- type: map_at_5
value: 6.6739999999999995
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 50.844
- type: mrr_at_100
value: 51.664
- type: mrr_at_1000
value: 51.664
- type: mrr_at_3
value: 46.259
- type: mrr_at_5
value: 49.116
- type: ndcg_at_1
value: 37.755
- type: ndcg_at_10
value: 23.477
- type: ndcg_at_100
value: 36.268
- type: ndcg_at_1000
value: 47.946
- type: ndcg_at_3
value: 25.832
- type: ndcg_at_5
value: 24.235
- type: precision_at_1
value: 40.816
- type: precision_at_10
value: 20.204
- type: precision_at_100
value: 7.611999999999999
- type: precision_at_1000
value: 1.543
- type: precision_at_3
value: 25.169999999999998
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 3.066
- type: recall_at_10
value: 14.985999999999999
- type: recall_at_100
value: 47.902
- type: recall_at_1000
value: 83.56400000000001
- type: recall_at_3
value: 5.755
- type: recall_at_5
value: 8.741999999999999
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 69.437
- type: ap
value: 12.844066827082706
- type: f1
value: 52.74974809872495
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.26768534238823
- type: f1
value: 61.65100187399282
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.860968711078804
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.7423854085951
- type: cos_sim_ap
value: 73.47560303339571
- type: cos_sim_f1
value: 67.372778183589
- type: cos_sim_precision
value: 62.54520795660036
- type: cos_sim_recall
value: 73.00791556728232
- type: dot_accuracy
value: 85.36091077069798
- type: dot_ap
value: 72.42521572307255
- type: dot_f1
value: 66.90576304724215
- type: dot_precision
value: 62.96554934823091
- type: dot_recall
value: 71.37203166226914
- type: euclidean_accuracy
value: 85.76026703224653
- type: euclidean_ap
value: 73.44852563860128
- type: euclidean_f1
value: 67.3
- type: euclidean_precision
value: 63.94299287410926
- type: euclidean_recall
value: 71.02902374670185
- type: manhattan_accuracy
value: 85.7423854085951
- type: manhattan_ap
value: 73.2635034755551
- type: manhattan_f1
value: 67.3180263800684
- type: manhattan_precision
value: 62.66484765802638
- type: manhattan_recall
value: 72.71767810026385
- type: max_accuracy
value: 85.76026703224653
- type: max_ap
value: 73.47560303339571
- type: max_f1
value: 67.372778183589
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.67543757519307
- type: cos_sim_ap
value: 85.35516518531304
- type: cos_sim_f1
value: 77.58197635511934
- type: cos_sim_precision
value: 75.01078360891445
- type: cos_sim_recall
value: 80.33569448721897
- type: dot_accuracy
value: 87.61400240617844
- type: dot_ap
value: 83.0774968268665
- type: dot_f1
value: 75.68229012162561
- type: dot_precision
value: 72.99713876967095
- type: dot_recall
value: 78.57252848783493
- type: euclidean_accuracy
value: 88.73753250281368
- type: euclidean_ap
value: 85.48043564821317
- type: euclidean_f1
value: 77.75975862719216
- type: euclidean_precision
value: 76.21054187920456
- type: euclidean_recall
value: 79.37326763166
- type: manhattan_accuracy
value: 88.75111576823068
- type: manhattan_ap
value: 85.44993439423668
- type: manhattan_f1
value: 77.6861329994845
- type: manhattan_precision
value: 74.44601270289344
- type: manhattan_recall
value: 81.22112719433323
- type: max_accuracy
value: 88.75111576823068
- type: max_ap
value: 85.48043564821317
- type: max_f1
value: 77.75975862719216
---
# chihlunLee/NoInstruct-small-Embedding-v0-Q4_0-GGUF
This model was converted to GGUF format from [`avsolatorio/NoInstruct-small-Embedding-v0`](https://huggingface.co/avsolatorio/NoInstruct-small-Embedding-v0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/avsolatorio/NoInstruct-small-Embedding-v0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo chihlunLee/NoInstruct-small-Embedding-v0-Q4_0-GGUF --hf-file noinstruct-small-embedding-v0-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo chihlunLee/NoInstruct-small-Embedding-v0-Q4_0-GGUF --hf-file noinstruct-small-embedding-v0-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo chihlunLee/NoInstruct-small-Embedding-v0-Q4_0-GGUF --hf-file noinstruct-small-embedding-v0-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo chihlunLee/NoInstruct-small-Embedding-v0-Q4_0-GGUF --hf-file noinstruct-small-embedding-v0-q4_0.gguf -c 2048
```
|
gguichard/wsd_myriade_distil_adapter | gguichard | 2024-07-02T06:53:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T06:52:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
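As a starting point, here is a minimal, unofficial sketch; it assumes the repository loads as a standard CamemBERT token-classification checkpoint (as the tags suggest), and the French example sentence is purely illustrative.
```python
# Minimal sketch (assumption: the checkpoint works with the standard token-classification pipeline)
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="gguichard/wsd_myriade_distil_adapter",  # this repository
    aggregation_strategy="simple",  # merge sub-word pieces into word-level predictions
)

print(tagger("Le conseil municipal a voté le budget hier soir."))  # illustrative sentence
```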
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nishika26/codellama-sql-sft-merged | Nishika26 | 2024-07-02T07:04:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T06:54:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
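In the meantime, the sketch below shows one plausible way to query the checkpoint; it assumes a standard merged causal-LM layout, and the SQL-style prompt is illustrative rather than the exact template used during fine-tuning.
```python
# Hedged sketch (assumptions: merged causal LM; illustrative prompt format)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nishika26/codellama-sql-sft-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = (
    "-- Table: employees(id, name, salary)\n"
    "-- Question: list the three highest-paid employees\n"
    "SELECT"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```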
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/llama3-8B-DarkIdol-2.2-Uncensored-1048K-GGUF | QuantFactory | 2024-07-02T07:46:05Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T06:54:51Z | Entry not found |
baxtos/bartik08-3 | baxtos | 2024-07-02T06:59:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:56:40Z | Entry not found |
casque/0241_brown_fur_coat_v1 | casque | 2024-07-02T06:58:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-07-02T06:57:05Z | ---
license: creativeml-openrail-m
---
|
Danielrahmai1991/sentimentnewsModel_4bit | Danielrahmai1991 | 2024-07-02T07:01:01Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:sujet-ai/Sujet-Finance-8B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-07-02T06:57:13Z | ---
base_model: sujet-ai/Sujet-Finance-8B-v0.1
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Danielrahmai1991
- **License:** apache-2.0
- **Finetuned from model :** sujet-ai/Sujet-Finance-8B-v0.1
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
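The card does not yet include an inference snippet; the sketch below is an unofficial guess at how the merged 4-bit checkpoint could be queried — the prompt wording is illustrative and may not match the template used during fine-tuning.
```python
# Hedged sketch (assumptions: merged 4-bit causal LM loadable via bitsandbytes; illustrative prompt)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Danielrahmai1991/sentimentnewsModel_4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Classify the sentiment of this financial headline: 'Shares surge after record quarterly earnings.'\nSentiment:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```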
|
QuantFactory/llama3-8B-DarkIdol-1.2-GGUF | QuantFactory | 2024-07-02T07:49:56Z | 0 | 0 | null | [
"gguf",
"region:us"
] | null | 2024-07-02T06:57:59Z | Entry not found |
Pranja/temp-llama-8b-unsloth-merged | Pranja | 2024-07-02T07:07:59Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T06:59:10Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Pranja
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
drishanarora/cogito-v2-recipe-qwen2-7b-sft | drishanarora | 2024-07-03T01:27:16Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"qwen2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T06:59:27Z | Entry not found |
nidhistrive/whisper-small-hi | nidhistrive | 2024-07-02T06:59:39Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T06:59:39Z | Entry not found |
KasuleTrevor/wav2vec2-large-xls-r-300m-lg-cv-10hr-v3 | KasuleTrevor | 2024-07-02T09:04:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T07:02:04Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-lg-cv-10hr-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/ASR%20Africa/runs/o0fw7ke6)
# wav2vec2-large-xls-r-300m-lg-cv-10hr-v3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6399
- Wer: 0.5490
- Cer: 0.1258
## Model description
More information needed
## Intended uses & limitations
More information needed
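No usage snippet is provided yet; the following is a hedged sketch of transcription with this checkpoint, assuming the usual Wav2Vec2-CTC processor files are present and using a placeholder 16 kHz recording.
```python
# Hedged sketch (assumptions: standard Wav2Vec2-CTC checkpoint; "luganda_clip.wav" is a placeholder 16 kHz file)
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="KasuleTrevor/wav2vec2-large-xls-r-300m-lg-cv-10hr-v3",
)
print(asr("luganda_clip.wav")["text"])
```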
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| No log | 0.9948 | 95 | 5.7544 | 1.0 | 1.0 |
| 11.4782 | 2.0 | 191 | 3.4141 | 1.0 | 1.0 |
| 3.8877 | 2.9948 | 286 | 2.9705 | 1.0 | 1.0 |
| 3.0666 | 4.0 | 382 | 2.8116 | 1.0 | 1.0 |
| 2.8721 | 4.9948 | 477 | 0.9460 | 0.9262 | 0.2276 |
| 1.6147 | 6.0 | 573 | 0.6163 | 0.8134 | 0.1855 |
| 0.6412 | 6.9948 | 668 | 0.4726 | 0.6816 | 0.1425 |
| 0.4424 | 8.0 | 764 | 0.4475 | 0.6449 | 0.1306 |
| 0.3408 | 8.9948 | 859 | 0.4403 | 0.6429 | 0.1310 |
| 0.2786 | 10.0 | 955 | 0.4409 | 0.6139 | 0.1252 |
| 0.24 | 10.9948 | 1050 | 0.4206 | 0.5878 | 0.1218 |
| 0.2111 | 12.0 | 1146 | 0.4501 | 0.5916 | 0.1194 |
| 0.1881 | 12.9948 | 1241 | 0.4514 | 0.5645 | 0.1140 |
| 0.1672 | 14.0 | 1337 | 0.4553 | 0.5761 | 0.1224 |
| 0.1532 | 14.9948 | 1432 | 0.4780 | 0.5764 | 0.1179 |
| 0.1421 | 16.0 | 1528 | 0.4795 | 0.5767 | 0.1177 |
| 0.1357 | 16.9948 | 1623 | 0.4573 | 0.5643 | 0.1189 |
| 0.1248 | 18.0 | 1719 | 0.4774 | 0.5679 | 0.1202 |
| 0.1176 | 18.9948 | 1814 | 0.5095 | 0.5659 | 0.1186 |
| 0.111 | 20.0 | 1910 | 0.4775 | 0.5562 | 0.1138 |
| 0.1093 | 20.9948 | 2005 | 0.5052 | 0.5465 | 0.1115 |
| 0.1017 | 22.0 | 2101 | 0.5074 | 0.5464 | 0.1123 |
| 0.1017 | 22.9948 | 2196 | 0.5003 | 0.5419 | 0.1135 |
| 0.0965 | 24.0 | 2292 | 0.5247 | 0.5420 | 0.1130 |
| 0.0947 | 24.9948 | 2387 | 0.5224 | 0.5474 | 0.1152 |
| 0.0903 | 26.0 | 2483 | 0.5124 | 0.5250 | 0.1089 |
| 0.0865 | 26.9948 | 2578 | 0.5339 | 0.5387 | 0.1100 |
| 0.0837 | 28.0 | 2674 | 0.5362 | 0.5340 | 0.1128 |
| 0.0836 | 28.9948 | 2769 | 0.5354 | 0.5276 | 0.1095 |
| 0.0773 | 30.0 | 2865 | 0.5512 | 0.5352 | 0.1101 |
| 0.075 | 30.9948 | 2960 | 0.5162 | 0.5102 | 0.1058 |
| 0.0723 | 32.0 | 3056 | 0.5296 | 0.5236 | 0.1057 |
| 0.0764 | 32.9948 | 3151 | 0.5447 | 0.5289 | 0.1083 |
| 0.0706 | 34.0 | 3247 | 0.5291 | 0.5355 | 0.1138 |
| 0.0694 | 34.9948 | 3342 | 0.5314 | 0.5244 | 0.1116 |
| 0.0679 | 36.0 | 3438 | 0.5199 | 0.5215 | 0.1135 |
| 0.0645 | 36.9948 | 3533 | 0.5555 | 0.5244 | 0.1118 |
| 0.0623 | 38.0 | 3629 | 0.5392 | 0.5266 | 0.1141 |
| 0.0622 | 38.9948 | 3724 | 0.5500 | 0.5248 | 0.1125 |
| 0.06 | 40.0 | 3820 | 0.5467 | 0.5197 | 0.1121 |
| 0.0598 | 40.9948 | 3915 | 0.5405 | 0.5161 | 0.1120 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
KYUNGHYUN9/itos_v0.004_1.3b-1000step_longdata | KYUNGHYUN9 | 2024-07-02T07:02:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:02:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ShapeKapseln33/Hondrostrong5 | ShapeKapseln33 | 2024-07-02T07:03:43Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:02:31Z | Hondrostrong France Reviews Hondrostrong Forte Cream 100ml is a topical cream that helps relieve joint inflammation. It is indicated for the prevention of joint conditions and is particularly suitable for people who lead an active life and for athletes. Contains 100ml.
Click here to buy now on the official Hondrostrong website
Where to Buy Hondrostrong
In France, there are various distribution channels for dietary supplements. Popular among them are pharmacies and websites such as Amazon. However, a recommended solution is buying online. Buying online offers several advantages, such as the convenience of ordering from home, the ability to compare prices easily, and often the chance to take advantage of exclusive promotions.
Hondrostrong can only be purchased on the manufacturer's official website, which guarantees that you receive the original product and not a counterfeit. This exclusivity prevents any confusion that could arise if Hondrostrong were available in pharmacies or on Amazon. In addition, buying directly from the official website ensures product freshness and generates cost savings, often passed on to consumers in the form of promotions and discounts. To place your order in France, simply visit the manufacturer's website, where you will find special offers available locally.
Composition and Effect of the Ingredients
Introduction to the Product's Composition
Hondrostrong stands out for its unique composition, derived from carefully selected natural ingredients. Drawn mainly from traditional Maori culture, the ingredients of this supplement offer diverse, complementary benefits for the joints and connective tissues.
The Ingredients of Hondrostrong
Green-lipped mussel extract: These shellfish are rich in mucopolysaccharides, beneficial for supporting and regenerating connective tissue.
Apitoxin: Known for its anti-inflammatory and analgesic properties, it helps reduce pain and repair cartilage.
Holly leaf juice: This ingredient has anti-inflammatory and antimicrobial effects, providing essential support against arthritis.
Cardamom extract: It helps improve blood circulation and relieve muscle pain.
Amaranth oil: Rich in fatty acids and antioxidants, it nourishes and protects joint tissues.
It is worth noting that one study reports that 9 out of 10 adults using this combination of ingredients observed notable improvements in their mobility and joint comfort.
Click here to buy now on the official Hondrostrong website
Directions for Use
Usage Instructions
To obtain the best results, here is how to use Hondrostrong according to the recommended directions:
Application: Apply a small amount of cream to the affected area.
Massage: Massage gently until fully absorbed, twice a day.
Duration: Use regularly for 1.5 to 2 months for optimal effects.
Adaptability to Users' Needs
Elderly people: Users over 65 may benefit from an additional application, taking care to monitor for any reaction.
Intensive use: People suffering from chronic joint pain may slightly increase the amount applied, following the recommendations.
Effects and Their Characteristics
Introduction to the Positive Effects
Hondrostrong is renowned for its beneficial effects on joints and connective tissues. Thanks to its unique composition, this product significantly improves users' quality of life by reducing pain and increasing mobility. Regular application of Hondrostrong allows noticeable benefits to be felt over time.
Effects of the Product
This dietary supplement begins to act from the very first applications. You may notice a gradual decrease in joint pain and a reduction in inflammation. The cream also promotes cartilage regeneration, thereby optimizing flexibility and mobility. With its anti-inflammatory and analgesic properties, Hondrostrong offers lasting relief, making it possible to resume daily activities more comfortably. Users have reported significant improvement after 1.5 to 2 months of regular use. In fact, a recent study showed that 87% of users notice increased mobility and greater comfort after this period.
Contraindications and Side Effects
Absence of Side Effects
Hondrostrong is formulated to be safe and well tolerated by most people, thanks to its carefully selected natural ingredients. This product has no known side effects since it uses no harmful chemicals. Its natural components, such as green-lipped mussel extract, apitoxin, and holly leaf juice, are suited to supporting joint health without causing discomfort.
Contraindications
It is essential to check the list of ingredients carefully before using Hondrostrong. People with allergies to any of the components should avoid this product to prevent any adverse reaction. If you have known allergies, consult the product's composition on the manufacturer's official website to make sure it poses no risk to you. This simple precaution guarantees completely safe use and the optimization of its natural benefits without any unfavorable reaction.
Click here to buy now on the official Hondrostrong website
|
JayYH/whisper-medium-ko | JayYH | 2024-07-02T07:17:28Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ko",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T07:02:42Z | ---
language:
- ko
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
model-index:
- name: Whisper Korean - whisper-medium
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Korean - whisper-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3709
- Cer: 8.6970
## Model description
More information needed
## Intended uses & limitations
More information needed
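No usage example is given yet; below is a hedged sketch of Korean transcription with this checkpoint, assuming standard fine-tuned Whisper processor files and a placeholder audio clip.
```python
# Hedged sketch (assumptions: standard fine-tuned Whisper layout; "korean_clip.wav" is a placeholder file)
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="JayYH/whisper-medium-ko",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(asr("korean_clip.wav")["text"])
```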
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0045 | 11.1111 | 500 | 0.3555 | 8.8374 |
| 0.0006 | 22.2222 | 1000 | 0.3709 | 8.6970 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
SidXXD/person | SidXXD | 2024-07-02T08:30:13Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-07-02T07:02:58Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/person
These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on "photo of a <v1*> person" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
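The card does not show inference code; the sketch below follows the generic Custom Diffusion loading pattern from diffusers and assumes the repository contains the usual training outputs (`pytorch_custom_diffusion_weights.bin` and a `<v1*>.bin` token embedding) — the file names may differ.
```python
# Hedged sketch of Custom Diffusion inference (assumed weight file names; adjust to what the repo actually contains)
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("SidXXD/person", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("SidXXD/person", weight_name="<v1*>.bin")

image = pipe("photo of a <v1*> person", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("person.png")
```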
|
Yuki20/llama3_8b_sql3 | Yuki20 | 2024-07-02T07:03:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:03:17Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Yuki20
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Human23/ThePhotographer | Human23 | 2024-07-02T07:04:07Z | 0 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | null | 2024-07-02T07:04:07Z | ---
license: cc-by-sa-4.0
---
|
Fishychick/Translation | Fishychick | 2024-07-02T07:04:33Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:04:33Z | Entry not found |
jeromesky/pronunciation_accuracy_v1.0.1 | jeromesky | 2024-07-02T07:41:58Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"audio-classification",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-07-02T07:06:50Z | Entry not found |
zhey666/quantized | zhey666 | 2024-07-02T07:38:25Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:08:17Z | Entry not found |
Pranja/temp-llama-8b-unsloth | Pranja | 2024-07-02T07:08:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:08:17Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Pranja
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nicoson/research | nicoson | 2024-07-02T07:08:25Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T07:08:25Z | ---
license: mit
---
|
slone/nllb-206-v2-ct2-int8 | slone | 2024-07-02T07:38:33Z | 0 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:08:58Z | Entry not found |
raymondcty/hoiks_dev_a | raymondcty | 2024-07-02T08:44:23Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-02T07:10:18Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: hoiks_dev_a
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6222222447395325
---
# hoiks_dev_a
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
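For quick inference, a hedged sketch is shown below; it assumes the usual HuggingPics export (a ViT checkpoint with its image processor), and `photo.jpg` is a placeholder image path.
```python
# Hedged sketch (assumptions: standard ViT image-classification checkpoint; "photo.jpg" is a placeholder)
from transformers import pipeline

classifier = pipeline("image-classification", model="raymondcty/hoiks_dev_a")
print(classifier("photo.jpg"))
```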
## Example Images
#### female

#### male
 |
SidXXD/dog | SidXXD | 2024-07-02T08:30:59Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-07-02T07:11:32Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/dog
These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on "photo of a <v1*> dog" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
oleshy/ontochem_biobert_half | oleshy | 2024-07-02T08:43:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-base-cased-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T07:12:15Z | ---
base_model: dmis-lab/biobert-base-cased-v1.1
tags:
- generated_from_trainer
model-index:
- name: ontochem_biobert_half
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ontochem_biobert_half
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.1](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0778
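The training data and label set are not documented, but as a token-classification fine-tune the model can be queried with the standard pipeline; a minimal sketch (the example sentence is a placeholder and the entity types depend on the undocumented labels) might be:
```python
from transformers import pipeline

# Token classification with the fine-tuned BioBERT checkpoint
ner = pipeline(
    "token-classification",
    model="oleshy/ontochem_biobert_half",
    aggregation_strategy="simple",
)

# Placeholder sentence; entity types depend on the labels used during fine-tuning
print(ner("Aspirin inhibits cyclooxygenase enzymes."))
```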
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 24 | 0.9696 |
| No log | 2.0 | 48 | 0.8620 |
| No log | 3.0 | 72 | 0.6842 |
| No log | 4.0 | 96 | 0.4193 |
| No log | 5.0 | 120 | 0.1765 |
| No log | 6.0 | 144 | 0.1210 |
| No log | 7.0 | 168 | 0.0996 |
| No log | 8.0 | 192 | 0.0849 |
| No log | 9.0 | 216 | 0.0770 |
| No log | 10.0 | 240 | 0.0739 |
| No log | 11.0 | 264 | 0.0739 |
| No log | 12.0 | 288 | 0.0731 |
| No log | 13.0 | 312 | 0.0751 |
| No log | 14.0 | 336 | 0.0778 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Hemanth1729/SentimentAnalysis_modelv1 | Hemanth1729 | 2024-07-02T07:13:06Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:13:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Manasa312/manasa-stocks-gpt | Manasa312 | 2024-07-02T07:50:20Z | 0 | 0 | null | [
"dataset:paperswithbacktest/Stocks-Daily-Price",
"dataset:destinybound/NSE-stock-market-historical-data",
"region:us"
] | null | 2024-07-02T07:13:32Z | ---
datasets:
- paperswithbacktest/Stocks-Daily-Price
- destinybound/NSE-stock-market-historical-data
--- |
ClementineBleuze/deberta_prefix_SEP | ClementineBleuze | 2024-07-02T08:59:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T07:13:36Z | ---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta_prefix_SEP
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_prefix_SEP
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1280
- F1 Weighted: 0.8462
- F1 Samples: 0.8514
- F1 Macro: 0.6929
- F1 Micro: 0.8519
- Accuracy: 0.8227
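The samples/micro F1 metrics suggest a multi-label setup, although the label set is not documented here; a minimal inference sketch under that assumption (the example text and the 0.5 threshold are placeholders) could look like:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ClementineBleuze/deberta_prefix_SEP")
model = AutoModelForSequenceClassification.from_pretrained("ClementineBleuze/deberta_prefix_SEP")

# Placeholder input text
inputs = tokenizer("example sentence", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: multi-label classification, so sigmoid scores with a 0.5 cut-off
probs = torch.sigmoid(logits)[0]
print([model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5])
```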
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Weighted | F1 Samples | F1 Macro | F1 Micro | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:-----------:|:----------:|:--------:|:--------:|:--------:|
| 0.2898 | 0.3381 | 500 | 0.1976 | 0.6319 | 0.6429 | 0.3095 | 0.6811 | 0.6286 |
| 0.1842 | 0.6761 | 1000 | 0.1556 | 0.7398 | 0.7600 | 0.3935 | 0.7804 | 0.7422 |
| 0.1567 | 1.0142 | 1500 | 0.1433 | 0.7572 | 0.7845 | 0.4055 | 0.7974 | 0.7652 |
| 0.133 | 1.3523 | 2000 | 0.1308 | 0.8164 | 0.8213 | 0.6479 | 0.8290 | 0.7984 |
| 0.1277 | 1.6903 | 2500 | 0.1295 | 0.8061 | 0.8190 | 0.6039 | 0.8260 | 0.7943 |
| 0.1234 | 2.0284 | 3000 | 0.1283 | 0.8245 | 0.8267 | 0.6714 | 0.8272 | 0.7903 |
| 0.0993 | 2.3665 | 3500 | 0.1253 | 0.8438 | 0.8499 | 0.6938 | 0.8509 | 0.8221 |
| 0.1035 | 2.7045 | 4000 | 0.1371 | 0.8220 | 0.8276 | 0.6619 | 0.8290 | 0.8004 |
| 0.1036 | 3.0426 | 4500 | 0.1280 | 0.8462 | 0.8514 | 0.6929 | 0.8519 | 0.8227 |
| 0.085 | 3.3807 | 5000 | 0.1298 | 0.8403 | 0.8498 | 0.6907 | 0.8489 | 0.8214 |
| 0.0838 | 3.7187 | 5500 | 0.1337 | 0.8294 | 0.8330 | 0.6689 | 0.8321 | 0.7997 |
| 0.0849 | 4.0568 | 6000 | 0.1208 | 0.8451 | 0.8495 | 0.6913 | 0.8501 | 0.8166 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
cortexso/claude-3-opus-20240229 | cortexso | 2024-07-02T07:29:27Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:16:02Z | Entry not found |
maxseats/SungBeom-whisper-small-ko-set14 | maxseats | 2024-07-02T07:17:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"speech-recognition",
"ko",
"dataset:maxseats/aihub-464-preprocessed-680GB-set-14",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-07-02T07:16:41Z |
---
language: ko
tags:
- whisper
- speech-recognition
datasets:
- maxseats/aihub-464-preprocessed-680GB-set-14
metrics:
- cer
---
# Model Name : maxseats/SungBeom-whisper-small-ko-set14
# Description
- Fine-tuning dataset: maxseats/aihub-464-preprocessed-680GB-set-14
# Details
- We are progressively training on AI Hub's meeting speech dataset for major domains.
- Starting from the model fine-tuned on sets set_0~13 (140 GB) of the 680 GB corpus, this checkpoint was further trained on the set_14 data (10 GB).
- Link: https://huggingface.co/datasets/maxseats/aihub-464-preprocessed-680GB-set-14
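No usage snippet is provided; a minimal sketch with the `transformers` speech-recognition pipeline (the audio file name is a placeholder) would be:
```python
from transformers import pipeline

# Korean meeting-speech ASR with the fine-tuned Whisper-small checkpoint
asr = pipeline("automatic-speech-recognition", model="maxseats/SungBeom-whisper-small-ko-set14")

# "meeting_clip.wav" is a placeholder path to a local audio file
print(asr("meeting_clip.wav")["text"])
```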
|
Tony3097/Sreekanth | Tony3097 | 2024-07-02T07:17:36Z | 0 | 0 | null | [
"en",
"license:mit",
"region:us"
] | null | 2024-07-02T07:16:51Z | ---
license: mit
language:
- en
--- |
zhangfaen/Florence-2-large-ft | zhangfaen | 2024-06-22T18:09:39Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"florence2",
"text-generation",
"vision",
"image-to-text",
"custom_code",
"arxiv:2311.06242",
"license:mit",
"autotrain_compatible",
"region:us"
] | image-to-text | 2024-07-02T07:17:46Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Florence-2-large-ft/resolve/main/LICENSE
pipeline_tag: image-to-text
tags:
- vision
---
# Florence-2: Advancing a Unified Representation for a Variety of Vision Tasks
## Model Summary
This is a copy of Microsoft's model with a few fixes. The PRs for the fixes are open on the original model but until they merge I'm using this one to have everything set up correctly.
This Hub repository contains HuggingFace's `transformers` implementation of the Florence-2 model from Microsoft.
Florence-2 is an advanced vision foundation model that uses a prompt-based approach to handle a wide range of vision and vision-language tasks. Florence-2 can interpret simple text prompts to perform tasks like captioning, object detection, and segmentation. It leverages our FLD-5B dataset, containing 5.4 billion annotations across 126 million images, to master multi-task learning. The model's sequence-to-sequence architecture enables it to excel in both zero-shot and fine-tuned settings, proving to be a competitive vision foundation model.
Resources and Technical Documentation:
+ [Florence-2 technical report](https://arxiv.org/abs/2311.06242).
+ [Jupyter Notebook for inference and visualization of Florence-2-large model](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
| Model | Model size | Model Description |
| ------- | ------------- | ------------- |
| Florence-2-base[[HF]](https://huggingface.co/microsoft/Florence-2-base) | 0.23B | Pretrained model with FLD-5B
| Florence-2-large[[HF]](https://huggingface.co/microsoft/Florence-2-large) | 0.77B | Pretrained model with FLD-5B
| Florence-2-base-ft[[HF]](https://huggingface.co/microsoft/Florence-2-base-ft) | 0.23B | Finetuned model on a collection of downstream tasks
| Florence-2-large-ft[[HF]](https://huggingface.co/microsoft/Florence-2-large-ft) | 0.77B | Finetuned model on a collection of downstream tasks
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
do_sample=False,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task="<OD>", image_size=(image.width, image.height))
print(parsed_answer)
```
## Tasks
This model is capable of performing different tasks through changing the prompts.
First, let's define a function to run a prompt.
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large-ft", trust_remote_code=True)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true"
image = Image.open(requests.get(url, stream=True).raw)
def run_example(task_prompt, text_input=None):
if text_input is None:
prompt = task_prompt
else:
prompt = task_prompt + text_input
inputs = processor(text=prompt, images=image, return_tensors="pt")
generated_ids = model.generate(
input_ids=inputs["input_ids"],
pixel_values=inputs["pixel_values"],
max_new_tokens=1024,
num_beams=3
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
parsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=(image.width, image.height))
print(parsed_answer)
```
</details>
Here are the tasks `Florence-2` could perform:
<details>
<summary> Click to expand </summary>
### Caption
```python
prompt = "<CAPTION>"
run_example(prompt)
```
### Detailed Caption
```python
prompt = "<DETAILED_CAPTION>"
run_example(prompt)
```
### More Detailed Caption
```python
prompt = "<MORE_DETAILED_CAPTION>"
run_example(prompt)
```
### Caption to Phrase Grounding
The caption to phrase grounding task requires an additional text input, i.e. the caption.
Caption to phrase grounding results format:
{'\<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2], ...], 'labels': ['', '', ...]}}
```python
task_prompt = "<CAPTION_TO_PHRASE_GROUNDING>"
results = run_example(task_prompt, text_input="A green car parked in front of a yellow building.")
```
### Object Detection
OD results format:
{'\<OD>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<OD>"
run_example(prompt)
```
### Dense Region Caption
Dense region caption results format:
{'\<DENSE_REGION_CAPTION>' : {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['label1', 'label2', ...]} }
```python
prompt = "<DENSE_REGION_CAPTION>"
run_example(prompt)
```
### Region proposal
Region proposal results format:
{'\<REGION_PROPOSAL>': {'bboxes': [[x1, y1, x2, y2], ...],
'labels': ['', '', ...]}}
```python
prompt = "<REGION_PROPOSAL>"
run_example(prompt)
```
### OCR
```python
prompt = "<OCR>"
run_example(prompt)
```
### OCR with Region
OCR with region output format:
{'\<OCR_WITH_REGION>': {'quad_boxes': [[x1, y1, x2, y2, x3, y3, x4, y4], ...], 'labels': ['text1', ...]}}
```python
prompt = "<OCR_WITH_REGION>"
run_example(prompt)
```
for More detailed examples, please refer to [notebook](https://huggingface.co/microsoft/Florence-2-large/blob/main/sample_inference.ipynb)
</details>
# Benchmarks
## Florence-2 Zero-shot performance
The following table presents the zero-shot performance of generalist vision foundation models on image captioning and object detection evaluation tasks. These models have not been exposed to the training data of the evaluation tasks during their training phase.
| Method | #params | COCO Cap. test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | COCO Det. val2017 mAP |
|--------|---------|----------------------|------------------|--------------------|-----------------------|
| Flamingo | 80B | 84.3 | - | - | - |
| Florence-2-base| 0.23B | 133.0 | 118.7 | 70.1 | 34.7 |
| Florence-2-large| 0.77B | 135.6 | 120.8 | 72.8 | 37.5 |
The following table continues the comparison with performance on other vision-language evaluation tasks.
| Method | Flickr30k test R@1 | Refcoco val Accuracy | Refcoco test-A Accuracy | Refcoco test-B Accuracy | Refcoco+ val Accuracy | Refcoco+ test-A Accuracy | Refcoco+ test-B Accuracy | Refcocog val Accuracy | Refcocog test Accuracy | Refcoco RES val mIoU |
|--------|----------------------|----------------------|-------------------------|-------------------------|-----------------------|--------------------------|--------------------------|-----------------------|------------------------|----------------------|
| Kosmos-2 | 78.7 | 52.3 | 57.4 | 47.3 | 45.5 | 50.7 | 42.2 | 60.6 | 61.7 | - |
| Florence-2-base | 83.6 | 53.9 | 58.4 | 49.7 | 51.5 | 56.4 | 47.9 | 66.3 | 65.1 | 34.6 |
| Florence-2-large | 84.4 | 56.3 | 61.6 | 51.4 | 53.6 | 57.9 | 49.9 | 68.0 | 67.0 | 35.8 |
## Florence-2 finetuned performance
We finetune Florence-2 models with a collection of downstream tasks, resulting in two generalist models, *Florence-2-base-ft* and *Florence-2-large-ft*, that can conduct a wide range of downstream tasks.
The table below compares the performance of specialist and generalist models on various captioning and Visual Question Answering (VQA) tasks. Specialist models are fine-tuned specifically for each task, whereas generalist models are fine-tuned in a task-agnostic manner across all tasks. The symbol "▲" indicates the usage of external OCR as input.
| Method | # Params | COCO Caption Karpathy test CIDEr | NoCaps val CIDEr | TextCaps val CIDEr | VQAv2 test-dev Acc | TextVQA test-dev Acc | VizWiz VQA test-dev Acc |
|----------------|----------|-----------------------------------|------------------|--------------------|--------------------|----------------------|-------------------------|
| **Specialist Models** | | | | | | | |
| CoCa | 2.1B | 143.6 | 122.4 | - | 82.3 | - | - |
| BLIP-2 | 7.8B | 144.5 | 121.6 | - | 82.2 | - | - |
| GIT2 | 5.1B | 145.0 | 126.9 | 148.6 | 81.7 | 67.3 | 71.0 |
| Flamingo | 80B | 138.1 | - | - | 82.0 | 54.1 | 65.7 |
| PaLI | 17B | 149.1 | 127.0 | 160.0▲ | 84.3 | 58.8 / 73.1▲ | 71.6 / 74.4▲ |
| PaLI-X | 55B | 149.2 | 126.3 | 147.0 / 163.7▲ | 86.0 | 71.4 / 80.8▲ | 70.9 / 74.6▲ |
| **Generalist Models** | | | | | | | |
| Unified-IO | 2.9B | - | 100.0 | - | 77.9 | - | 57.4 |
| Florence-2-base-ft | 0.23B | 140.0 | 116.7 | 143.9 | 79.7 | 63.6 | 63.6 |
| Florence-2-large-ft | 0.77B | 143.3 | 124.9 | 151.1 | 81.7 | 73.5 | 72.6 |
| Method | # Params | COCO Det. val2017 mAP | Flickr30k test R@1 | RefCOCO val Accuracy | RefCOCO test-A Accuracy | RefCOCO test-B Accuracy | RefCOCO+ val Accuracy | RefCOCO+ test-A Accuracy | RefCOCO+ test-B Accuracy | RefCOCOg val Accuracy | RefCOCOg test Accuracy | RefCOCO RES val mIoU |
|----------------------|----------|-----------------------|--------------------|----------------------|-------------------------|-------------------------|------------------------|---------------------------|---------------------------|------------------------|-----------------------|------------------------|
| **Specialist Models** | | | | | | | | | | | | |
| SeqTR | - | - | - | 83.7 | 86.5 | 81.2 | 71.5 | 76.3 | 64.9 | 74.9 | 74.2 | - |
| PolyFormer | - | - | - | 90.4 | 92.9 | 87.2 | 85.0 | 89.8 | 78.0 | 85.8 | 85.9 | 76.9 |
| UNINEXT | 0.74B | 60.6 | - | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 | - |
| Ferret | 13B | - | - | 89.5 | 92.4 | 84.4 | 82.8 | 88.1 | 75.2 | 85.8 | 86.3 | - |
| **Generalist Models** | | | | | | | | | | | | |
| UniTAB | - | - | - | 88.6 | 91.1 | 83.8 | 81.0 | 85.4 | 71.6 | 84.6 | 84.7 | - |
| Florence-2-base-ft | 0.23B | 41.4 | 84.0 | 92.6 | 94.8 | 91.5 | 86.8 | 91.7 | 82.2 | 89.8 | 82.2 | 78.0 |
| Florence-2-large-ft| 0.77B | 43.4 | 85.2 | 93.4 | 95.3 | 92.0 | 88.3 | 92.9 | 83.6 | 91.2 | 91.7 | 80.5 |
## BibTex and citation info
```
@article{xiao2023florence,
title={Florence-2: Advancing a unified representation for a variety of vision tasks},
author={Xiao, Bin and Wu, Haiping and Xu, Weijian and Dai, Xiyang and Hu, Houdong and Lu, Yumao and Zeng, Michael and Liu, Ce and Yuan, Lu},
journal={arXiv preprint arXiv:2311.06242},
year={2023}
}
``` |
sara-m98/ECO_BETO_UNCASED_1 | sara-m98 | 2024-07-02T12:00:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-07-02T07:18:03Z | dccuchile/bert-base-spanish-wwm-uncased
Training configuration:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='ECO_DEBERTA',
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=32,
    weight_decay=0.01,
    save_strategy="epoch",
    load_best_model_at_end=True,
    push_to_hub=True
)
```
| Epoch | Training Loss | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-----:|:-------------:|:---------------:|:---------:|:--------:|:--------:|:--------:|
| 1 | No log | 0.059145 | 0.400593 | 0.361543 | 0.380068 | 0.985344 |
| 2 | 0.101700 | 0.049539 | 0.432272 | 0.429031 | 0.430645 | 0.987116 |
| 3 | 0.101700 | 0.053214 | 0.441069 | 0.486074 | 0.462479 | 0.987581 |
| 4 | 0.018500 | 0.061344 | 0.417263 | 0.484199 | 0.448246 | 0.986019 |
| 5 | 0.018500 | 0.075176 | 0.444500 | 0.476165 | 0.459788 | 0.986241 |
| 6 | 0.007600 | 0.072537 | 0.498946 | 0.507231 | 0.503054 | 0.987182 |
| 7 | 0.007600 | 0.077320 | 0.470782 | 0.498393 | 0.484194 | 0.987042 |
| 8 | 0.003900 | 0.081492 | 0.436262 | 0.501339 | 0.466542 | 0.986602 |
| 9 | 0.003900 | 0.086251 | 0.487153 | 0.512855 | 0.499674 | 0.987172 |
| 10 | 0.002600 | 0.077991 | 0.501917 | 0.525978 | 0.513665 | 0.987661 |
| 11 | 0.002600 | 0.088477 | 0.490048 | 0.520889 | 0.504998 | 0.987188 |
| 12 | 0.001700 | 0.094080 | 0.505658 | 0.526513 | 0.515875 | 0.987291 |
| 13 | 0.001700 | 0.094199 | 0.496042 | 0.503482 | 0.499734 | 0.987419 |
| 14 | 0.001400 | 0.094274 | 0.488923 | 0.514194 | 0.501240 | 0.987217 |
| 15 | 0.001400 | 0.090643 | 0.499105 | 0.522496 | 0.510533 | 0.987548 |
| 16 | 0.001000 | 0.100787 | 0.498829 | 0.513390 | 0.506005 | 0.987340 |
| 17 | 0.001000 | 0.098315 | 0.481785 | 0.534815 | 0.506917 | 0.986888 |
| 18 | 0.000900 | 0.101438 | 0.492332 | 0.507231 | 0.499670 | 0.987248 |
| 19 | 0.000900 | 0.103375 | 0.486770 | 0.522228 | 0.503876 | 0.987124 |
| 20 | 0.000700 | 0.107590 | 0.498841 | 0.518479 | 0.508470 | 0.987172 |
| 21 | 0.000700 | 0.109080 | 0.495807 | 0.506695 | 0.501192 | 0.986912 |
| 22 | 0.000700 | 0.104284 | 0.491876 | 0.502678 | 0.497219 | 0.987169 |
| 23 | 0.000700 | 0.103310 | 0.509659 | 0.515801 | 0.512711 | 0.987454 |
| 24 | 0.000500 | 0.103671 | 0.489717 | 0.510177 | 0.499738 | 0.987075 |
| 25 | 0.000500 | 0.107423 | 0.504276 | 0.521157 | 0.512577 | 0.987289 |
| 26 | 0.000500 | 0.108173 | 0.502179 | 0.524638 | 0.513163 | 0.987316 |
| 27 | 0.000500 | 0.110980 | 0.499222 | 0.515265 | 0.507116 | 0.987186 |
| 28 | 0.000400 | 0.106286 | 0.498570 | 0.513658 | 0.506002 | 0.987346 |
| 29 | 0.000400 | 0.106577 | 0.495431 | 0.522764 | 0.508731 | 0.987418 |
| 30 | 0.000400 | 0.109099 | 0.503998 | 0.523299 | 0.513467 | 0.987505 |
| 31 | 0.000400 | 0.110884 | 0.504755 | 0.525978 | 0.515148 | 0.987427 |
| 32 | 0.000300 | 0.110531 | 0.508949 | 0.525442 | 0.517064 | 0.987497 |

|
habulaj/135474110907 | habulaj | 2024-07-02T07:19:29Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:19:26Z | Entry not found |
hasininawoda/check | hasininawoda | 2024-07-02T07:20:07Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-07-02T07:19:47Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
instance_prompt: a photo of TOK person
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - hasininawoda/check
<Gallery />
## Model description
These are hasininawoda/check LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of TOK person to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](hasininawoda/check/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
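A minimal sketch of the usual SDXL + LoRA loading path with `diffusers` (fp16 on GPU and the prompt are just example choices):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repo
pipe.load_lora_weights("hasininawoda/check")

image = pipe("a photo of TOK person at the beach").images[0]
image.save("tok_person.png")
```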
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
nguyenthanhdo/ViMath-PAL-deepseek-math-7B-LORA | nguyenthanhdo | 2024-07-02T07:22:16Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:deepseek-ai/deepseek-math-7b-rl",
"license:other",
"region:us"
] | null | 2024-07-02T07:21:16Z | ---
base_model: deepseek-ai/deepseek-math-7b-rl
library_name: peft
license: other
tags:
- generated_from_trainer
model-index:
- name: workspace/axolotl/vinh/deepseek-ai_deepseek-math-7b-rl-lora-2024-07-01-15-56-34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: deepseek-ai/deepseek-math-7b-rl
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/axolotl/vinh/PAL/input_output_dsmath.json
type: input_output
dataset_prepared_path:
val_set_size: 0.05
eval_sample_packing: false
output_dir: /workspace/axolotl/vinh/deepseek-ai_deepseek-math-7b-rl-lora-2024-07-01-15-56-34
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 128
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens: 512
saves_per_epoch: 2
save_total_limit: 20
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end▁of▁sentence|>
```
</details><br>
# workspace/axolotl/vinh/deepseek-ai_deepseek-math-7b-rl-lora-2024-07-01-15-56-34
This model is a fine-tuned version of [deepseek-ai/deepseek-math-7b-rl](https://huggingface.co/deepseek-ai/deepseek-math-7b-rl) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0276
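This repo holds LoRA adapter weights only, so inference requires attaching them to the base model; a minimal PEFT sketch (the prompt and generation settings are placeholders, and the PAL-style Vietnamese math usage is inferred from the dataset path above) is:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-math-7b-rl", device_map="auto")
model = PeftModel.from_pretrained(base, "nguyenthanhdo/ViMath-PAL-deepseek-math-7B-LORA")
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-math-7b-rl")

# Placeholder prompt (Vietnamese math question)
inputs = tokenizer("Tính 12 + 35.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0], skip_special_tokens=True))
```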
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4197 | 0.0095 | 1 | 0.4128 |
| 0.0885 | 0.1043 | 11 | 0.0781 |
| 0.0482 | 0.2086 | 22 | 0.0517 |
| 0.045 | 0.3129 | 33 | 0.0429 |
| 0.0425 | 0.4172 | 44 | 0.0400 |
| 0.0411 | 0.5214 | 55 | 0.0379 |
| 0.0348 | 0.6257 | 66 | 0.0359 |
| 0.0288 | 0.7300 | 77 | 0.0342 |
| 0.0339 | 0.8343 | 88 | 0.0331 |
| 0.0297 | 0.9386 | 99 | 0.0318 |
| 0.0281 | 1.0429 | 110 | 0.0312 |
| 0.027 | 1.1472 | 121 | 0.0303 |
| 0.023 | 1.2515 | 132 | 0.0298 |
| 0.0259 | 1.3558 | 143 | 0.0297 |
| 0.0232 | 1.4600 | 154 | 0.0300 |
| 0.0203 | 1.5643 | 165 | 0.0291 |
| 0.0241 | 1.6686 | 176 | 0.0284 |
| 0.0245 | 1.7729 | 187 | 0.0282 |
| 0.0222 | 1.8772 | 198 | 0.0277 |
| 0.0231 | 1.9815 | 209 | 0.0278 |
| 0.0175 | 2.0858 | 220 | 0.0276 |
| 0.0165 | 2.1901 | 231 | 0.0281 |
| 0.0174 | 2.2943 | 242 | 0.0281 |
| 0.021 | 2.3986 | 253 | 0.0279 |
| 0.0147 | 2.5029 | 264 | 0.0277 |
| 0.0162 | 2.6072 | 275 | 0.0277 |
| 0.0206 | 2.7115 | 286 | 0.0276 |
| 0.0241 | 2.8158 | 297 | 0.0276 |
| 0.0162 | 2.9201 | 308 | 0.0276 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
gglabs/Solar-kiosk-scenario-1-epoch | gglabs | 2024-07-02T15:01:06Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:chihoonlee10/T3Q-ko-solar-dpo-v7.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:21:38Z | ---
base_model: chihoonlee10/T3Q-ko-solar-dpo-v7.0
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** chihoonlee10/T3Q-ko-solar-dpo-v7.0
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
YiDuo1999/Gemma-2-9b-medical | YiDuo1999 | 2024-07-02T10:06:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T07:22:29Z | ---
license: gemma
---
## Introduction
This repo contains Gemma-2-9b-Medical, a medical language model with 9 billion parameters. The model builds upon Gemma-2-9b-base and has been tuned with diverse medical and general instructions. We also apply the three strategies from the paper 'Efficient Continual Pre-training by Mitigating the Stability Gap' to mitigate the stability gap during instruction tuning, which improves the model's performance on medical tasks and reduces computation cost.
## 💻 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
model_name = "YiDuo1999/Gemma-2-9b-medical"
device_map = 'auto'
model = AutoModelForCausalLM.from_pretrained( model_name, trust_remote_code=True,use_cache=False,device_map=device_map)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
def askme(question):
sys_message = '''
You are an AI Medical Assistant trained on a vast dataset of health information. Please be thorough and
provide an informative answer. If you don't know the answer to a specific medical inquiry, advise seeking professional help.
'''
# Create messages structured for the chat template
messages = [{"role": "system", "content": sys_message}, {"role": "user", "content": question}]
# Applying chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100, use_cache=True)
# Extract and return the generated text, removing the prompt
response_text = tokenizer.batch_decode(outputs)[0].strip()
answer = response_text.split('<|im_start|>assistant')[-1].strip()
return answer
```
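As a quick illustration, a hypothetical call to the helper defined above (the question text is only an example):
```python
print(askme("What are the common side effects of metformin?"))
```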
## 🏆 Evaluation
For question-answering tasks, the results are as follows:
| Model | MMLU-Medical | PubMedQA | MedMCQA | MedQA-4-Option | Avg |
|:-------------------------------|:-------------|:---------|:--------|:---------------|:-----|
| Mistral-7B-instruct | 55.8 | 17.8 | 40.2 | 41.1 | 37.5 |
| Zephyr-7B-instruct-β | 63.3 | 46.0 | 43.0 | 48.5 | 48.7 |
| PMC-Llama-7B | 59.7 | 59.2 | 57.6 | 49.2 | 53.6 |
| Medalpaca-13B | 55.2 | 50.4 | 21.2 | 20.2 | 36.7 |
| AlpaCare-13B | 60.2 | 53.8 | 38.5 | 30.4 | 45.7 |
| BioMedGPT-LM 7B | 52.0 | 58.6 | 34.9 | 39.3 | 46.2 |
| Me-Llama-13B | - | 70.0 | 44.9 | 42.7 | - |
| Llama-3-8B instruct | 82.0 | 74.6 | 57.1 | 60.3 | 68.5 |
| JSL-Med-Sft-Llama-3-8B | 83.0 | 75.4 | 57.5 | 74.8 | 72.7 |
| GPT-3.5-turbo-1106 | 74.0 | 72.6 | 34.9 | 39.3 | 60.6 |
| GPT-4 | 85.5 | 69.2 | 69.5 | 83.9 | 77.0 |
| Gemma-2-9b-int | 75.0 | 76.0 | 40.3 | 48.9 | 60.0 |
| Gemma-2-9b-Medical | 75.0 | 76.0 | 61.3 | 59.7 | 68.0 |
| Llama-3-physician-8B instruct | 80.0 | 76.0 | 80.2 | 60.3 | 74.1 |
## Citation
```
@inproceedings{Guo2024EfficientCP,
title={Efficient Continual Pre-training by Mitigating the Stability Gap},
author={Yiduo Guo and Jie Fu and Huishuai Zhang and Dongyan Zhao and Yikang Shen},
year={2024},
url={https://api.semanticscholar.org/CorpusID:270688100}
}
``` |
EscheWang/GeneBPE | EscheWang | 2024-07-02T07:22:31Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T07:22:31Z | ---
license: mit
---
|
Abhi964/L3_Cube_Task_0_10epoch | Abhi964 | 2024-07-02T07:25:40Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:ai4bharat/indic-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-07-02T07:22:34Z | ---
license: mit
base_model: ai4bharat/indic-bert
tags:
- generated_from_trainer
model-index:
- name: Trial1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial1
This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7663
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Tokenizers 0.19.1
|
Nex432/project | Nex432 | 2024-07-02T07:31:44Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T07:23:35Z | ---
license: mit
---
|
joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF | joshnader | 2024-07-02T07:24:14Z | 0 | 0 | null | [
"gguf",
"nlp",
"math",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:microsoft/rho-math-7b-interpreter-v0.1",
"license:mit",
"region:us"
] | text-generation | 2024-07-02T07:23:43Z | ---
base_model: microsoft/rho-math-7b-interpreter-v0.1
language:
- en
license: mit
pipeline_tag: text-generation
tags:
- nlp
- math
- llama-cpp
- gguf-my-repo
---
# joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`microsoft/rho-math-7b-interpreter-v0.1`](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF --hf-file rho-math-7b-interpreter-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF --hf-file rho-math-7b-interpreter-v0.1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF --hf-file rho-math-7b-interpreter-v0.1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo joshnader/rho-math-7b-interpreter-v0.1-Q8_0-GGUF --hf-file rho-math-7b-interpreter-v0.1-q8_0.gguf -c 2048
```
|
gglabs/Gemma-kiosk-scenario-2-epoch | gglabs | 2024-07-02T09:18:24Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:gemmathon/gemma-2b-ko-dev-pbmt192",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:24:19Z | ---
base_model: gemmathon/gemma-2b-ko-dev-pbmt192
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** gemmathon/gemma-2b-ko-dev-pbmt192
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
abhayesian/LLama3_HarmBench_LAT_10 | abhayesian | 2024-07-02T16:44:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:25:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Laim/Llama-3-WebAgentMaps-8B-Instruct_v2 | Laim | 2024-07-02T07:31:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T07:25:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yongjinchoi/sdxl-webtoon-model_0702 | yongjinchoi | 2024-07-02T07:25:56Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:25:56Z | Entry not found |
skaty5678/temp-SOP-full-deduped-810 | skaty5678 | 2024-07-02T07:26:44Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:26:44Z | Entry not found |
wangjin2000/esm2_t6_8M-lora-binding-sites_2024-07-02_09-26-54 | wangjin2000 | 2024-07-03T01:15:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:facebook/esm2_t6_8M_UR50D",
"license:mit",
"region:us"
] | null | 2024-07-02T07:26:54Z | ---
base_model: facebook/esm2_t6_8M_UR50D
library_name: peft
license: mit
metrics:
- accuracy
- precision
- recall
- f1
tags:
- generated_from_trainer
model-index:
- name: esm2_t12_35M-lora-binding-sites_2024-07-02_09-26-54
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esm2_t6_8M-lora-binding-sites_2024-07-02_09-26-54
This model is a fine-tuned version of [facebook/esm2_t6_8M_UR50D](https://huggingface.co/facebook/esm2_t6_8M_UR50D) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3706
- Accuracy: 0.8880
- Precision: 0.1563
- Recall: 0.7878
- F1: 0.2608
- Auc: 0.8392
- Mcc: 0.3192
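This repo contains a PEFT/LoRA adapter rather than a full checkpoint; a minimal loading sketch (the two-label token-classification head and the example protein sequence are assumptions) is:
```python
import torch
from peft import PeftModel
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumption: a binary (binding / non-binding) per-residue classification head
base = AutoModelForTokenClassification.from_pretrained("facebook/esm2_t6_8M_UR50D", num_labels=2)
model = PeftModel.from_pretrained(base, "wangjin2000/esm2_t6_8M-lora-binding-sites_2024-07-02_09-26-54")
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")

# Placeholder protein sequence; per-residue argmax gives the predicted binding sites
inputs = tokenizer("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", return_tensors="pt")
with torch.no_grad():
    preds = model(**inputs).logits.argmax(dim=-1)
print(preds)
```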
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005701568055793089
- train_batch_size: 12
- eval_batch_size: 12
- seed: 8893
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Auc | Mcc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|:------:|
| 0.2569 | 1.0 | 14485 | 0.3706 | 0.8880 | 0.1563 | 0.7878 | 0.2608 | 0.8392 | 0.3192 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.40.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1 |
gglabs/Gemma-kiosk-scenario-3-epoch | gglabs | 2024-07-02T09:55:33Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:gemmathon/gemma-2b-ko-dev-pbmt192",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:27:12Z | ---
base_model: gemmathon/gemma-2b-ko-dev-pbmt192
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
---
# Uploaded model
- **Developed by:** gglabs
- **License:** apache-2.0
- **Finetuned from model :** gemmathon/gemma-2b-ko-dev-pbmt192
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
sloshywings/my_food_model | sloshywings | 2024-07-02T07:39:39Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-02T07:27:31Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6229
- Accuracy: 0.908
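No inference example is given; a minimal sketch with the `transformers` image-classification pipeline (the image path is a placeholder, and the label set comes from the undocumented training data) might be:
```python
from transformers import pipeline

# Image classification with the fine-tuned ViT checkpoint
classifier = pipeline("image-classification", model="sloshywings/my_food_model")

# "dish.jpg" is a placeholder path to a local image
print(classifier("dish.jpg"))
```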
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7124 | 0.992 | 62 | 2.5371 | 0.807 |
| 1.8389 | 2.0 | 125 | 1.8040 | 0.883 |
| 1.6124 | 2.976 | 186 | 1.6229 | 0.908 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
nguyenthanhdo/ViMath-PAL-Llama-3-8B-LORA | nguyenthanhdo | 2024-07-02T07:28:49Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"license:other",
"region:us"
] | null | 2024-07-02T07:27:44Z | ---
base_model: NousResearch/Meta-Llama-3-8B-Instruct
library_name: peft
license: other
tags:
- generated_from_trainer
model-index:
- name: workspace/axolotl/vinh/NousResearch_Meta-Llama-3-8B-Instruct-lora-2024-07-01-14-28-39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: NousResearch/Meta-Llama-3-8B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/axolotl/vinh/PAL/input_output_llama3.json
type: input_output
dataset_prepared_path:
val_set_size: 0.05
eval_sample_packing: false
output_dir: /workspace/axolotl/vinh/NousResearch_Meta-Llama-3-8B-Instruct-lora-2024-07-01-14-28-39
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 128
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens: 512
saves_per_epoch: 2
save_total_limit: 20
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# workspace/axolotl/vinh/NousResearch_Meta-Llama-3-8B-Instruct-lora-2024-07-01-14-28-39
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0392
## Model description
More information needed
## Intended uses & limitations
More information needed
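This repository contains LoRA adapter weights only, so they are normally applied on top of the base model listed in the metadata (NousResearch/Meta-Llama-3-8B-Instruct) via PEFT. A minimal loading sketch follows; the dtype and device placement are assumptions, not part of the card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NousResearch/Meta-Llama-3-8B-Instruct"
adapter_id = "nguyenthanhdo/ViMath-PAL-Llama-3-8B-LORA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"  # assumed settings
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
model.eval()
```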
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5339 | 0.0095 | 1 | 0.5036 |
| 0.0879 | 0.1043 | 11 | 0.0813 |
| 0.0582 | 0.2086 | 22 | 0.0629 |
| 0.06 | 0.3129 | 33 | 0.0566 |
| 0.0593 | 0.4172 | 44 | 0.0514 |
| 0.054 | 0.5214 | 55 | 0.0483 |
| 0.0459 | 0.6257 | 66 | 0.0469 |
| 0.0397 | 0.7300 | 77 | 0.0460 |
| 0.0453 | 0.8343 | 88 | 0.0449 |
| 0.04 | 0.9386 | 99 | 0.0429 |
| 0.0338 | 1.0429 | 110 | 0.0418 |
| 0.0322 | 1.1472 | 121 | 0.0422 |
| 0.0275 | 1.2515 | 132 | 0.0416 |
| 0.0322 | 1.3558 | 143 | 0.0416 |
| 0.0266 | 1.4600 | 154 | 0.0404 |
| 0.0249 | 1.5643 | 165 | 0.0397 |
| 0.0292 | 1.6686 | 176 | 0.0393 |
| 0.031 | 1.7729 | 187 | 0.0385 |
| 0.0265 | 1.8772 | 198 | 0.0375 |
| 0.0273 | 1.9815 | 209 | 0.0375 |
| 0.0175 | 2.0858 | 220 | 0.0377 |
| 0.0168 | 2.1901 | 231 | 0.0396 |
| 0.0182 | 2.2943 | 242 | 0.0403 |
| 0.0201 | 2.3986 | 253 | 0.0397 |
| 0.0138 | 2.5029 | 264 | 0.0393 |
| 0.0173 | 2.6072 | 275 | 0.0392 |
| 0.0186 | 2.7115 | 286 | 0.0392 |
| 0.0209 | 2.8158 | 297 | 0.0392 |
| 0.0185 | 2.9201 | 308 | 0.0392 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
EmbeddedLLM/Phi-3-mini-4k-instruct-062024-onnx | EmbeddedLLM | 2024-07-02T09:27:28Z | 0 | 0 | null | [
"onnx",
"ONNX",
"DML",
"ONNXRuntime",
"phi3",
"nlp",
"conversational",
"custom_code",
"text-generation",
"en",
"license:mit",
"region:us"
] | text-generation | 2024-07-02T07:28:10Z | ---
license: mit
pipeline_tag: text-generation
tags:
- ONNX
- DML
- ONNXRuntime
- phi3
- nlp
- conversational
- custom_code
inference: false
language:
- en
---
# EmbeddedLLM/Phi-3-mini-4k-instruct-062024 ONNX
## Model Summary
This model is an ONNX-optimized version of [microsoft/Phi-3-mini-4k-instruct (June 2024)](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct), designed to provide accelerated inference on a variety of hardware using ONNX Runtime (CPU and DirectML).
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning, providing GPU acceleration for a wide range of supported hardware and drivers, including AMD, Intel, NVIDIA, and Qualcomm GPUs.
## ONNX Models
Here are some of the optimized configurations we have added:
- **ONNX model for int4 DirectML:** ONNX model for AMD, Intel, and NVIDIA GPUs on Windows, quantized to int4 using AWQ.
## Usage
### Installation and Setup
To use the EmbeddedLLM/Phi-3-mini-4k-instruct-062024 ONNX model on Windows with DirectML, follow these steps:
1. **Create and activate a Conda environment:**
```sh
conda create -n onnx python=3.10
conda activate onnx
```
2. **Install Git LFS:**
```sh
winget install -e --id GitHub.GitLFS
```
3. **Install Hugging Face CLI:**
```sh
pip install huggingface-hub[cli]
```
4. **Download the model:**
```sh
huggingface-cli download EmbeddedLLM/Phi-3-mini-4k-instruct-062024-onnx --include="onnx/directml/Phi-3-mini-4k-instruct-062024-int4/*" --local-dir .\Phi-3-mini-4k-instruct-062024-int4
```
5. **Install necessary Python packages:**
```sh
pip install numpy==1.26.4
pip install onnxruntime-directml
pip install --pre onnxruntime-genai-directml==0.3.0
```
6. **Install Visual Studio 2015 runtime:**
```sh
conda install conda-forge::vs2015_runtime
```
7. **Download the example script:**
```sh
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py" -OutFile "phi3-qa.py"
```
8. **Run the example script:**
```sh
python phi3-qa.py -m .\Phi-3-mini-4k-instruct-062024-int4
```
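Besides the bundled `phi3-qa.py` script, the model can also be driven directly from Python with `onnxruntime-genai`. The sketch below follows the onnxruntime-genai 0.3.0 API installed in step 5 and points at the same folder used in step 8; the question text is a placeholder, and the chat markers follow the usual Phi-3 prompt format.

```python
import onnxruntime_genai as og

# Minimal sketch: token-by-token generation against the int4 DirectML model folder.
model = og.Model(r".\Phi-3-mini-4k-instruct-062024-int4")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

prompt = "<|user|>\nWhat is DirectML?<|end|>\n<|assistant|>\n"  # Phi-3 chat format
params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    # Stream-decode each new token as it is produced.
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```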
### Hardware Requirements
**Minimum Configuration:**
- **Windows:** DirectX 12-capable GPU (AMD/Nvidia)
- **CPU:** x86_64 / ARM64
**Tested Configurations:**
- **GPU:** AMD Ryzen 8000 Series iGPU (DirectML)
- **CPU:** AMD Ryzen CPU
## Model Description
- **Developed by:** Microsoft
- **Model type:** ONNX
- **Language(s) (NLP):** Python, C, C++
- **License:** Apache License Version 2.0
- **Model Description:** This model is a conversion of Phi-3-mini-4k-instruct-062024 to ONNX for ONNX Runtime inference, optimized for DirectML.
|
haiefff/cartoon-anime-3 | haiefff | 2024-07-02T07:55:09Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"onnx",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:haiefff/anime-or-not",
"base_model:google/vit-base-patch16-224",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-07-02T07:29:34Z |
---
tags:
- autotrain
- image-classification
base_model: google/vit-base-patch16-224
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- haiefff/anime-or-not
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
No validation metrics available
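A hedged inference sketch for the exported classifier is shown below; it reuses one of the sample image URLs from this card's widget, and the label names come from whatever classes were present in the training dataset.

```python
from transformers import pipeline

# Minimal sketch: score an image with the AutoTrain-exported ViT classifier.
detector = pipeline("image-classification", model="haiefff/cartoon-anime-3")
result = detector(
    "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
)
print(result)
```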
|
sakasaku/SpaceInvadersNoFrameskip | sakasaku | 2024-07-02T07:29:42Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:29:42Z | Entry not found |
z3n7r4ck3r/filtered_dataset_20240702_093010 | z3n7r4ck3r | 2024-07-02T07:30:09Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:30:09Z | Entry not found |
itay-nakash/model_387dff9370_sweep_classic-totem-1173 | itay-nakash | 2024-07-02T07:30:36Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:30:36Z | Entry not found |
Rohithqwerty/mistral_film | Rohithqwerty | 2024-07-02T07:40:44Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:30:56Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
---
# Uploaded model
- **Developed by:** Rohithqwerty
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
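Since the weights are uploaded as GGUF, one way to run them locally is with llama-cpp-python. The sketch below is an assumption-laden example, not part of the original card: it downloads whatever `.gguf` files exist in the repo and loads the first one found, and the prompt is a placeholder.

```python
import glob
from huggingface_hub import snapshot_download
from llama_cpp import Llama

# Minimal sketch: fetch the GGUF file(s) and load one with llama-cpp-python.
local_dir = snapshot_download("Rohithqwerty/mistral_film", allow_patterns=["*.gguf"])
gguf_path = sorted(glob.glob(f"{local_dir}/*.gguf"))[0]  # pick the first quantization found

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a one-sentence film synopsis:", max_tokens=64)  # placeholder prompt
print(out["choices"][0]["text"])
```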
|
2utku2/brad | 2utku2 | 2024-07-02T07:31:25Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:31:25Z | Entry not found |
FazleHasan191/paligemma_attire_300_896 | FazleHasan191 | 2024-07-02T07:31:55Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:31:55Z | Entry not found |
nguyenthanhdo/ViMath-PAL-CodeQwen1.5-7B-LORA | nguyenthanhdo | 2024-07-02T07:34:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"base_model:Qwen/CodeQwen1.5-7B-Chat",
"license:other",
"region:us"
] | null | 2024-07-02T07:32:54Z | ---
base_model: Qwen/CodeQwen1.5-7B-Chat
library_name: peft
license: other
tags:
- generated_from_trainer
model-index:
- name: workspace/axolotl/vinh/Qwen_CodeQwen1.5-7B-Chat-lora-2024-07-01-14-28-29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/CodeQwen1.5-7B-Chat
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/axolotl/vinh/PAL/input_output_qwen.json
type: input_output
dataset_prepared_path:
val_set_size: 0.05
eval_sample_packing: false
output_dir: /workspace/axolotl/vinh/Qwen_CodeQwen1.5-7B-Chat-lora-2024-07-01-14-28-29
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 128
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens: 512
saves_per_epoch: 2
save_total_limit: 20
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
```
</details><br>
# workspace/axolotl/vinh/Qwen_CodeQwen1.5-7B-Chat-lora-2024-07-01-14-28-29
This model is a fine-tuned version of [Qwen/CodeQwen1.5-7B-Chat](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0411
## Model description
More information needed
## Intended uses & limitations
More information needed
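As with the other adapters in this series, these are LoRA weights meant to be used with PEFT on top of Qwen/CodeQwen1.5-7B-Chat. One common workflow is to merge the adapter into the base model for standalone deployment; a hedged sketch (output directory name and dtype are placeholders):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/CodeQwen1.5-7B-Chat"
adapter_id = "nguyenthanhdo/ViMath-PAL-CodeQwen1.5-7B-LORA"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, adapter_id)

merged = model.merge_and_unload()  # bake the LoRA deltas into the base weights
merged.save_pretrained("codeqwen-vimath-pal-merged")          # placeholder path
AutoTokenizer.from_pretrained(base_id).save_pretrained("codeqwen-vimath-pal-merged")
```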
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3706 | 0.0095 | 1 | 0.3514 |
| 0.1022 | 0.1043 | 11 | 0.0935 |
| 0.0636 | 0.2086 | 22 | 0.0700 |
| 0.0624 | 0.3129 | 33 | 0.0633 |
| 0.0638 | 0.4172 | 44 | 0.0579 |
| 0.0587 | 0.5214 | 55 | 0.0547 |
| 0.0512 | 0.6257 | 66 | 0.0520 |
| 0.0505 | 0.7300 | 77 | 0.0496 |
| 0.0431 | 0.8343 | 88 | 0.0481 |
| 0.0437 | 0.9386 | 99 | 0.0460 |
| 0.0346 | 1.0429 | 110 | 0.0450 |
| 0.0366 | 1.1472 | 121 | 0.0448 |
| 0.0329 | 1.2515 | 132 | 0.0443 |
| 0.0385 | 1.3558 | 143 | 0.0437 |
| 0.0326 | 1.4600 | 154 | 0.0438 |
| 0.0331 | 1.5643 | 165 | 0.0426 |
| 0.036 | 1.6686 | 176 | 0.0415 |
| 0.0352 | 1.7729 | 187 | 0.0411 |
| 0.0267 | 1.8772 | 198 | 0.0405 |
| 0.0304 | 1.9815 | 209 | 0.0404 |
| 0.0251 | 2.0858 | 220 | 0.0407 |
| 0.0197 | 2.1901 | 231 | 0.0423 |
| 0.0221 | 2.2943 | 242 | 0.0421 |
| 0.0252 | 2.3986 | 253 | 0.0413 |
| 0.019 | 2.5029 | 264 | 0.0411 |
| 0.0208 | 2.6072 | 275 | 0.0411 |
| 0.028 | 2.7115 | 286 | 0.0411 |
| 0.0296 | 2.8158 | 297 | 0.0411 |
| 0.0224 | 2.9201 | 308 | 0.0411 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
wanib26/finetunetest | wanib26 | 2024-07-02T07:34:17Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-07-02T07:33:43Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
itay-nakash/model_387dff9370_sweep_swept-pyramid-1174 | itay-nakash | 2024-07-02T07:34:24Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:34:24Z | Entry not found |
atmatechai/speecht5_tts_dataset_primer_female_1090 | atmatechai | 2024-07-02T08:37:27Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-07-02T07:34:30Z | Entry not found |
FazleHasan191/paligemma_attire_500 | FazleHasan191 | 2024-07-02T11:12:08Z | 0 | 0 | null | [
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-07-02T07:34:32Z | Entry not found |
manbeast3b/ZZZZZZZZdriver130 | manbeast3b | 2024-07-02T07:36:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T07:34:59Z | Entry not found |
lewy666/llava-hr-ChartInstruction | lewy666 | 2024-07-02T17:23:03Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-07-02T07:35:03Z | Entry not found |
zavliju/tes_upload | zavliju | 2024-07-02T07:37:53Z | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | 2024-07-02T07:35:48Z | ---
license: mit
---
|
ankitvad/tempHF2 | ankitvad | 2024-07-02T09:17:01Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T07:36:00Z | ---
license: apache-2.0
---
|
hoangngx/vietnamese-correction-v2 | hoangngx | 2024-07-02T10:23:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-07-02T07:37:48Z | Entry not found |
Lemoooon/LexMatcher_8B | Lemoooon | 2024-07-02T07:50:55Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-07-02T07:37:50Z | Entry not found |
nguyenthanhdo/ViMath-PAL-Qwen2-7B-LORA | nguyenthanhdo | 2024-07-02T07:39:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"base_model:Qwen/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-07-02T07:38:11Z | ---
base_model: Qwen/Qwen2-7B-Instruct
library_name: peft
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-07-01-14-29-26
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2-7B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: /workspace/axolotl/vinh/PAL/input_output_qwen.json
type: input_output
dataset_prepared_path:
val_set_size: 0.05
eval_sample_packing: false
output_dir: /workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-07-01-14-29-26
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 128
micro_batch_size: 1
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 2e-4
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens: 512
saves_per_epoch: 2
save_total_limit: 20
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
```
</details><br>
# workspace/axolotl/vinh/Qwen_Qwen2-7B-Instruct-lora-2024-07-01-14-29-26
This model is a fine-tuned version of [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0356
## Model description
More information needed
## Intended uses & limitations
More information needed
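These LoRA weights sit on top of Qwen/Qwen2-7B-Instruct, so inference goes through PEFT plus the base model's chat template. The sketch below is an assumption about usage (the "answer with a short program" phrasing simply mirrors the PAL-style naming of the adapter and is not documented in the card):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2-7B-Instruct"
adapter_id = "nguyenthanhdo/ViMath-PAL-Qwen2-7B-LORA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.bfloat16, device_map="auto"  # assumed settings
    ),
    adapter_id,
)

messages = [{"role": "user",
             "content": "What is 15% of 240? Answer with a short Python program."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```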
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4503 | 0.0095 | 1 | 0.4264 |
| 0.0836 | 0.1043 | 11 | 0.0792 |
| 0.0532 | 0.2086 | 22 | 0.0566 |
| 0.0511 | 0.3129 | 33 | 0.0496 |
| 0.0511 | 0.4172 | 44 | 0.0457 |
| 0.0475 | 0.5214 | 55 | 0.0436 |
| 0.0435 | 0.6257 | 66 | 0.0420 |
| 0.0361 | 0.7300 | 77 | 0.0407 |
| 0.0406 | 0.8343 | 88 | 0.0391 |
| 0.0349 | 0.9386 | 99 | 0.0384 |
| 0.0304 | 1.0429 | 110 | 0.0373 |
| 0.0305 | 1.1472 | 121 | 0.0374 |
| 0.0251 | 1.2515 | 132 | 0.0365 |
| 0.0288 | 1.3558 | 143 | 0.0370 |
| 0.0251 | 1.4600 | 154 | 0.0366 |
| 0.0236 | 1.5643 | 165 | 0.0353 |
| 0.0266 | 1.6686 | 176 | 0.0353 |
| 0.0281 | 1.7729 | 187 | 0.0348 |
| 0.0246 | 1.8772 | 198 | 0.0340 |
| 0.0249 | 1.9815 | 209 | 0.0339 |
| 0.0169 | 2.0858 | 220 | 0.0349 |
| 0.0155 | 2.1901 | 231 | 0.0371 |
| 0.0178 | 2.2943 | 242 | 0.0369 |
| 0.0194 | 2.3986 | 253 | 0.0361 |
| 0.0139 | 2.5029 | 264 | 0.0357 |
| 0.0157 | 2.6072 | 275 | 0.0356 |
| 0.0197 | 2.7115 | 286 | 0.0357 |
| 0.0188 | 2.8158 | 297 | 0.0357 |
| 0.0163 | 2.9201 | 308 | 0.0356 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1 |
quissuiven/donut-ktp-v2-test | quissuiven | 2024-07-02T07:52:50Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:39:23Z | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-ktp-v2-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-ktp-v2-test
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
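Not documented in the card, but Donut checkpoints are normally used through `DonutProcessor` together with `VisionEncoderDecoderModel`. The sketch below is a non-authoritative example: the image path is a placeholder and the task prompt token is an assumption, since the card does not state which prompt was used during fine-tuning.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "quissuiven/donut-ktp-v2-test"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("ktp_sample.jpg").convert("RGB")   # placeholder image path
pixel_values = processor(image, return_tensors="pt").pixel_values

task_prompt = "<s_cord-v2>"  # assumption: the actual task token may differ
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)

# Decode, strip special tokens, and convert the tag sequence to JSON fields.
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
print(processor.token2json(sequence))
```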
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.3
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
triplee/supernatural_dataset_negativeQA_3epo_model | triplee | 2024-07-02T07:40:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-07-02T07:39:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
z3n7r4ck3r/filtered_dataset_20240702_094001 | z3n7r4ck3r | 2024-07-02T07:40:00Z | 0 | 0 | null | [
"region:us"
] | null | 2024-07-02T07:40:00Z | Entry not found |