modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
ibunescu/Phi-2_GDPR_8_7e | ibunescu | 2024-02-27T08:42:25Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T08:42:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
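Until the authors add an official snippet, the following is a minimal, unverified sketch using the standard 🤗 Transformers auto classes. It assumes the repository hosts full weights for a causal language model (a Phi-2 fine-tune, as the repository name suggests); adapt it if the repository actually contains an adapter.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo holds full weights for a causal LM (Phi-2 fine-tune, per its name).
model_id = "ibunescu/Phi-2_GDPR_8_7e"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative GDPR-themed prompt; the training prompt format is not documented.
inputs = tokenizer("What does the GDPR say about the right to data portability?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```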
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ibunescu/Phi-2_GDPR_8_3e_adapter | ibunescu | 2024-02-27T08:39:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T08:38:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
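The repository name suggests a PEFT/LoRA adapter rather than full model weights. Until the authors confirm this, the sketch below is a placeholder that assumes a standard PEFT adapter whose `adapter_config.json` points at its base model.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: the repo holds a PEFT adapter; AutoPeftModelForCausalLM reads
# adapter_config.json to locate and load the base model automatically.
adapter_id = "ibunescu/Phi-2_GDPR_8_3e_adapter"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)

# If the adapter repo does not ship a tokenizer, load it from the base model instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

inputs = tokenizer("Summarise Article 17 of the GDPR.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```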
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
peter3571/llama2-only-lora-test | peter3571 | 2024-02-27T08:38:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T08:38:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
peter3571/llama2-glora-finetunined-test | peter3571 | 2024-02-27T08:36:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T08:36:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Squishcan/anadearmas | Squishcan | 2024-02-27T08:30:33Z | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-27T08:20:57Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### anadearmas Dreambooth model trained by Squishcan with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
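The checkpoint can also be loaded directly with 🤗 Diffusers, since the repository is tagged as a `StableDiffusionPipeline`. The sketch below is illustrative; the instance token in the prompt is an assumption based on the concept name.
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth checkpoint as a standard Stable Diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "Squishcan/anadearmas", torch_dtype=torch.float16
).to("cuda")

# Assumption: the instance token matches the concept name used at training time.
image = pipe("a portrait photo of anadearmas, studio lighting").images[0]
image.save("anadearmas_sample.png")
```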
|
tomaszki/nous-gemma-three | tomaszki | 2024-02-27T08:30:32Z | 113 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T08:27:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
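In the meantime, here is a minimal sketch using the 🤗 Transformers `pipeline` API (the repository is tagged `gemma` / `text-generation`); the prompt is purely illustrative.
```python
from transformers import pipeline

# Generic text-generation pipeline; the model config supplies the Gemma architecture.
generator = pipeline("text-generation", model="tomaszki/nous-gemma-three")

# Illustrative prompt; any chat template used during fine-tuning is not documented here.
print(generator("Explain reinforcement learning in one sentence.", max_new_tokens=64)[0]["generated_text"])
```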
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fzzhang/mistral_gsm8k_s | fzzhang | 2024-02-27T08:29:59Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-02-27T01:31:36Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: mistral_gsm8k_s
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_gsm8k_s
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
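Because this is a PEFT adapter for Mistral-7B-v0.1, it can be attached to the base model as sketched below; the prompt is illustrative, since the exact training prompt format is not documented.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained on, then attach the adapter.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "fzzhang/mistral_gsm8k_s")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Illustrative GSM8K-style prompt.
inputs = tokenizer("Q: A farmer has 12 cows and buys 5 more. How many cows does he have?\nA:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```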
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0 |
hhs8746/sftestd2 | hhs8746 | 2024-02-27T08:29:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T08:29:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kazuma313/cat_vs_dog_classification | kazuma313 | 2024-02-27T08:26:07Z | 198 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cats_vs_dogs",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-26T08:21:33Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- cats_vs_dogs
model-index:
- name: cat_vs_dog_classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cat_vs_dog_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0226
- eval_accuracy: 0.9944
- eval_runtime: 38.0768
- eval_samples_per_second: 61.481
- eval_steps_per_second: 1.943
- epoch: 1.2
- step: 705
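For reference, a minimal inference sketch using the 🤗 Transformers image-classification pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# ViT fine-tune for cats vs. dogs; labels are resolved from the model config.
classifier = pipeline("image-classification", model="kazuma313/cat_vs_dog_classification")

# Placeholder path - point this at any local cat or dog photo.
print(classifier("my_pet.jpg"))
```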
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ThuyNT03/CS505_COQE_viT5_Prompting5_ASPOL_check_stable2 | ThuyNT03 | 2024-02-27T08:24:11Z | 104 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-27T07:20:41Z | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting5_ASPOL_check_stable2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting5_ASPOL_check_stable2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
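The task-specific prompt template is not documented, so the sketch below only shows the generic seq2seq loading path for this ViT5 fine-tune; the input string is a placeholder.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ThuyNT03/CS505_COQE_viT5_Prompting5_ASPOL_check_stable2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; the prompt format used during fine-tuning is not documented.
inputs = tokenizer("<input sentence in the training prompt format>", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```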
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
nagyadam0616/zephyr-x-twitter-5epocs-full | nagyadam0616 | 2024-02-27T08:24:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T08:23:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aeaee/SENTIENCE_Classification_Score_pytorch | aeaee | 2024-02-27T08:17:38Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-13T08:53:21Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: SENTIENCE_Classification_Score_pytorch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SENTIENCE_Classification_Score_pytorch
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 25 | 1.6458 | 0.14 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
|
PaulTbbr/dqn-SpaceInvadersNoFrameskip-v4 | PaulTbbr | 2024-02-27T08:17:04Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T08:16:29Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 655.00 +/- 272.44
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PaulTbbr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga PaulTbbr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga PaulTbbr
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Ombrea/rose_des_vents | Ombrea | 2024-02-27T08:16:18Z | 0 | 0 | null | [
"fr",
"license:mit",
"region:us"
] | null | 2024-02-27T07:54:06Z | ---
license: mit
language:
- fr
--- |
Denkiyohou/taxi3 | Denkiyohou | 2024-02-27T08:14:17Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T08:14:15Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your installation

# `load_from_hub` is the helper from the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="Denkiyohou/taxi3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
luerhard/PopBERT | luerhard | 2024-02-27T08:13:11Z | 230 | 3 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-08-11T11:49:42Z | ---
license: mit
language:
- de
pipeline_tag: text-classification
metrics:
- f1
library_name: transformers
---
# PopBERT
PopBERT is a model for German-language populism detection in political speeches within the German Bundestag, based on the deepset/gbert-large model: https://huggingface.co/deepset/gbert-large
It is a multilabel model trained on a manually curated dataset of sentences from the 18th and 19th legislative periods.
In addition to capturing the foundational dimensions of populism, namely "anti-elitism" and "people-centrism," the model was also fine-tuned to identify the underlying ideological orientation as either "left-wing" or "right-wing."
# Prediction
The model outputs a Tensor of length 4.
The table connects the position of the predicted probability to its dimension.
| **Index** | **Dimension** |
|-----------|--------------------------|
| 0 | Anti-Elitism |
| 1 | People-Centrism |
| 2 | Left-Wing Host-Ideology |
| 3 | Right-Wing Host-Ideology |
# Usage Example
```python
import torch
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("luerhard/PopBERT")
# load model
model = AutoModelForSequenceClassification.from_pretrained("luerhard/PopBERT")
# define text to be predicted
text = (
"Das ist Klassenkampf von oben, das ist Klassenkampf im Interesse von "
"Vermögenden und Besitzenden gegen die Mehrheit der Steuerzahlerinnen und "
"Steuerzahler auf dieser Erde."
)
# encode text with tokenizer
encodings = tokenizer(text, return_tensors="pt")
# predict
with torch.inference_mode():
out = model(**encodings)
# get probabilties
probs = torch.nn.functional.sigmoid(out.logits)
print(probs.detach().numpy())
```
```
[[0.8765146 0.34838045 0.983123 0.02148379]]
```
# Performance
To maximize performance, it is recommended to use the following thresholds per dimension:
```
[0.415961, 0.295400, 0.429109, 0.302714]
```
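For example, the probabilities from the usage example above can be binarised against these thresholds as follows:
```python
import numpy as np

# Per-dimension thresholds: anti-elitism, people-centrism, left-wing, right-wing.
thresholds = np.array([0.415961, 0.295400, 0.429109, 0.302714])

# `probs` is the sigmoid output from the usage example above.
labels = (probs.detach().numpy() >= thresholds).astype(int)
print(labels)  # [[1 1 1 0]] for the example sentence
```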
Using these thresholds, the model achieves the following performance on the test set:
| Dimension | Precision | Recall | F1 |
|---------------------|---------------|---------------|---------------|
| Anti-Elitism | 0.81 | 0.88 | 0.84 |
| People-Centrism | 0.70 | 0.73 | 0.71 |
| Left-Wing Ideology | 0.69 | 0.77 | 0.73 |
| Right-Wing Ideology | 0.68 | 0.66 | 0.67 |
| --- | --- | --- | --- |
| micro avg | 0.75 | 0.80 | 0.77 |
| macro avg | 0.72 | 0.76 | 0.74 |
|
ontocord/vbd-llama2-7B-50b-chat-mlx-4bit | ontocord | 2024-02-27T08:12:49Z | 3 | 2 | mlx | [
"mlx",
"safetensors",
"llama",
"en",
"vi",
"license:llama2",
"region:us"
] | null | 2024-02-27T02:54:02Z | ---
language:
- en
- vi
license: llama2
tags:
- mlx
---
# ontocord/vbd-llama2-7B-50b-chat-mlx-4bit
This model was converted to MLX format from [`LR-AI-Labs/vbd-llama2-7B-50b-chat`](https://huggingface.co/LR-AI-Labs/vbd-llama2-7B-50b-chat).
Refer to the [original model card](https://huggingface.co/LR-AI-Labs/vbd-llama2-7B-50b-chat) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("ontocord/vbd-llama2-7B-50b-chat-mlx-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
AsphyXIA/baarat-MOE-Hindi-v0.3 | AsphyXIA | 2024-02-27T08:12:08Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T08:05:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hoangthethief/best_model | hoangthethief | 2024-02-27T08:02:10Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T08:01:49Z | ---
license: mit
base_model: BAAI/bge-small-en-v1.5
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: best_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_model
This model is a fine-tuned version of [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
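A minimal inference sketch, assuming the checkpoint is used as an ordinary text-classification model (the example sentence is a placeholder and the label meanings are not documented here):

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
classifier = pipeline("text-classification", model="hoangthethief/best_model")

# Replace the placeholder sentence with text from your own domain.
print(classifier("This is an example sentence to classify."))
```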
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 14 | 1.5615 | 0.7826 |
| No log | 2.0 | 28 | 1.2408 | 0.8913 |
| No log | 3.0 | 42 | 0.9712 | 0.9130 |
| No log | 4.0 | 56 | 0.7769 | 0.9783 |
| No log | 5.0 | 70 | 0.6251 | 1.0 |
| No log | 6.0 | 84 | 0.5102 | 1.0 |
| No log | 7.0 | 98 | 0.4342 | 1.0 |
| No log | 8.0 | 112 | 0.3669 | 1.0 |
| No log | 9.0 | 126 | 0.3205 | 1.0 |
| No log | 10.0 | 140 | 0.2853 | 1.0 |
| No log | 11.0 | 154 | 0.2599 | 1.0 |
| No log | 12.0 | 168 | 0.2426 | 1.0 |
| No log | 13.0 | 182 | 0.2309 | 1.0 |
| No log | 14.0 | 196 | 0.2245 | 1.0 |
| No log | 15.0 | 210 | 0.2224 | 1.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Tokenizers 0.15.0
|
Lounarisnia/dqn-SpaceInvadersNoFrameskip-v4 | Lounarisnia | 2024-02-27T08:01:49Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T08:01:13Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 599.50 +/- 137.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Lounarisnia -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Lounarisnia -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Lounarisnia
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
yuexishuihan/lora-sdxl-dog | yuexishuihan | 2024-02-27T07:58:52Z | 1 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-27T02:18:15Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_0.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_1.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_2.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
license: openrail++
---
# SDXL LoRA DreamBooth - yuexishuihan/lora-sdxl-dog
<Gallery />
## Model description
These are yuexishuihan/lora-sdxl-dog LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/yuexishuihan/lora-sdxl-dog/tree/main) them in the Files & versions tab.
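## Use with 🧨 diffusers

A minimal inference sketch (a CUDA GPU, fp16 weights, and the widget prompt above are assumptions; adjust for your hardware):

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach this LoRA adapter.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yuexishuihan/lora-sdxl-dog")

# "sks dog" is the trigger phrase from the training prompt.
image = pipe("A photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog_in_bucket.png")
```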
|
kalobiralo/t5-grammar269k-model | kalobiralo | 2024-02-27T07:58:51Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-27T07:58:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
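A minimal sketch, assuming this is a standard T5 sequence-to-sequence checkpoint used for grammar correction; the input format (plain text vs. a task prefix) is an assumption:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "kalobiralo/t5-grammar269k-model"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Plain-text input is an assumption; the training prompt format is not documented here.
inputs = tokenizer("she go to school yesterday", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```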
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amitjain171980/xlm-roberta-base-finetuned-panx-all | amitjain171980 | 2024-02-27T07:54:30Z | 127 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-27T07:42:58Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1533
- F1: 0.8514
## Model description
More information needed
## Intended uses & limitations
More information needed
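A minimal inference sketch, assuming the checkpoint is used as a standard multilingual NER tagger (the example sentence is a placeholder):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="amitjain171980/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entities
)
print(ner("Angela Merkel besuchte Paris im Juli."))
```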
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2975 | 1.0 | 574 | 0.1738 | 0.7941 |
| 0.1427 | 2.0 | 1148 | 0.1585 | 0.8281 |
| 0.0947 | 3.0 | 1722 | 0.1533 | 0.8514 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.2
|
tomaszki/nous-gemma-two | tomaszki | 2024-02-27T07:54:30Z | 111 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T07:51:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
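A minimal sketch, assuming this is a standard Gemma chat checkpoint whose tokenizer ships a chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tomaszki/nous-gemma-two"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

messages = [{"role": "user", "content": "Write a haiku about the sea."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```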
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-09 | alinerodrigues | 2024-02-27T07:53:38Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-27T02:13:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-clean-09
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1435
- Wer: 0.0935
- Cer: 0.0274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 28.9568 | 1.0 | 67 | 4.5342 | 1.0 | 1.0 |
| 9.238 | 2.0 | 134 | 3.3773 | 0.9615 | 0.9278 |
| 3.8131 | 3.0 | 201 | 3.2552 | 0.9713 | 0.9672 |
| 3.8131 | 4.0 | 268 | 2.9818 | 0.9825 | 0.9845 |
| 3.7154 | 5.0 | 335 | 2.9048 | 1.0 | 1.0 |
| 3.2775 | 6.0 | 402 | 2.9244 | 1.0 | 1.0 |
| 3.2775 | 7.0 | 469 | 2.8494 | 1.0 | 1.0 |
| 3.295 | 8.0 | 536 | 2.8737 | 0.9829 | 0.9853 |
| 3.4359 | 9.0 | 603 | 2.8125 | 0.9648 | 0.9515 |
| 3.4359 | 10.0 | 670 | 2.8296 | 0.9704 | 0.9416 |
| 3.1631 | 11.0 | 737 | 2.7952 | 0.9690 | 0.9126 |
| 3.4425 | 12.0 | 804 | 2.7936 | 0.9746 | 0.9 |
| 3.4425 | 13.0 | 871 | 2.8312 | 0.9783 | 0.8831 |
| 3.1405 | 14.0 | 938 | 2.7702 | 0.9717 | 0.8519 |
| 3.1256 | 15.0 | 1005 | 2.7649 | 0.9769 | 0.8511 |
| 3.1256 | 16.0 | 1072 | 2.7452 | 0.9763 | 0.8156 |
| 3.3974 | 17.0 | 1139 | 2.7520 | 0.9769 | 0.8255 |
| 3.378 | 18.0 | 1206 | 2.7433 | 0.9835 | 0.7887 |
| 3.378 | 19.0 | 1273 | 2.7587 | 0.9845 | 0.7957 |
| 3.0588 | 20.0 | 1340 | 2.6809 | 0.9773 | 0.7508 |
| 3.2909 | 21.0 | 1407 | 2.6814 | 0.9812 | 0.7634 |
| 3.2909 | 22.0 | 1474 | 2.6797 | 0.9806 | 0.7662 |
| 3.2279 | 23.0 | 1541 | 2.6761 | 0.9710 | 0.7498 |
| 3.0068 | 24.0 | 1608 | 2.6769 | 0.9727 | 0.7580 |
| 3.0068 | 25.0 | 1675 | 2.6543 | 0.9743 | 0.7394 |
| 3.1098 | 26.0 | 1742 | 2.6652 | 0.9806 | 0.7654 |
| 3.2015 | 27.0 | 1809 | 2.6263 | 0.9707 | 0.7314 |
| 3.2015 | 28.0 | 1876 | 2.6353 | 0.9694 | 0.7274 |
| 2.9707 | 29.0 | 1943 | 2.5955 | 0.9727 | 0.7129 |
| 3.116 | 30.0 | 2010 | 2.5964 | 0.9677 | 0.7270 |
| 3.116 | 31.0 | 2077 | 2.5873 | 0.9704 | 0.7185 |
| 3.0265 | 32.0 | 2144 | 2.5629 | 0.9713 | 0.7012 |
| 3.0762 | 33.0 | 2211 | 2.5520 | 0.9717 | 0.6991 |
| 3.0762 | 34.0 | 2278 | 2.5075 | 0.9733 | 0.6937 |
| 3.0228 | 35.0 | 2345 | 2.5200 | 0.9707 | 0.7011 |
| 2.8856 | 36.0 | 2412 | 2.4967 | 0.9700 | 0.6913 |
| 2.8856 | 37.0 | 2479 | 2.5047 | 0.9717 | 0.7040 |
| 2.8122 | 38.0 | 2546 | 2.4714 | 0.9717 | 0.6900 |
| 2.9134 | 39.0 | 2613 | 2.4488 | 0.9667 | 0.6871 |
| 2.9134 | 40.0 | 2680 | 2.4372 | 0.9621 | 0.6826 |
| 2.7965 | 41.0 | 2747 | 2.4174 | 0.9661 | 0.6831 |
| 2.8698 | 42.0 | 2814 | 2.3973 | 0.9727 | 0.6733 |
| 2.8698 | 43.0 | 2881 | 2.3632 | 0.9681 | 0.6703 |
| 2.6243 | 44.0 | 2948 | 2.3325 | 0.9641 | 0.6784 |
| 2.7087 | 45.0 | 3015 | 2.3252 | 0.9730 | 0.6596 |
| 2.7087 | 46.0 | 3082 | 2.2806 | 0.9651 | 0.6643 |
| 2.6011 | 47.0 | 3149 | 2.2519 | 0.9592 | 0.6458 |
| 2.6838 | 48.0 | 3216 | 2.2211 | 0.9532 | 0.6363 |
| 2.6838 | 49.0 | 3283 | 2.1794 | 0.9433 | 0.6468 |
| 2.5863 | 50.0 | 3350 | 2.1214 | 0.9397 | 0.6294 |
| 2.5353 | 51.0 | 3417 | 2.0826 | 0.9499 | 0.6668 |
| 2.5353 | 52.0 | 3484 | 1.9885 | 0.9351 | 0.6165 |
| 2.3509 | 53.0 | 3551 | 1.8180 | 0.9104 | 0.5729 |
| 2.1853 | 54.0 | 3618 | 1.6297 | 0.8762 | 0.5067 |
| 2.1853 | 55.0 | 3685 | 1.3421 | 0.8603 | 0.4185 |
| 1.9876 | 56.0 | 3752 | 1.0128 | 0.7777 | 0.2953 |
| 1.5594 | 57.0 | 3819 | 0.7699 | 0.6663 | 0.2079 |
| 1.5594 | 58.0 | 3886 | 0.5548 | 0.4792 | 0.1294 |
| 1.1199 | 59.0 | 3953 | 0.4366 | 0.3768 | 0.0975 |
| 0.8571 | 60.0 | 4020 | 0.3568 | 0.3202 | 0.0805 |
| 0.8571 | 61.0 | 4087 | 0.3107 | 0.2556 | 0.0672 |
| 0.7263 | 62.0 | 4154 | 0.2792 | 0.2118 | 0.0573 |
| 0.5792 | 63.0 | 4221 | 0.2560 | 0.1838 | 0.0511 |
| 0.5792 | 64.0 | 4288 | 0.2452 | 0.1742 | 0.0494 |
| 0.5726 | 65.0 | 4355 | 0.2277 | 0.1624 | 0.0442 |
| 0.4892 | 66.0 | 4422 | 0.2126 | 0.1522 | 0.0416 |
| 0.4892 | 67.0 | 4489 | 0.2075 | 0.1456 | 0.0396 |
| 0.4639 | 68.0 | 4556 | 0.1982 | 0.1406 | 0.0382 |
| 0.435 | 69.0 | 4623 | 0.1916 | 0.1321 | 0.0365 |
| 0.435 | 70.0 | 4690 | 0.1888 | 0.1281 | 0.0354 |
| 0.4061 | 71.0 | 4757 | 0.1857 | 0.1215 | 0.0338 |
| 0.383 | 72.0 | 4824 | 0.1800 | 0.1179 | 0.0324 |
| 0.383 | 73.0 | 4891 | 0.1741 | 0.1182 | 0.0322 |
| 0.403 | 74.0 | 4958 | 0.1727 | 0.1159 | 0.0310 |
| 0.3688 | 75.0 | 5025 | 0.1711 | 0.1140 | 0.0312 |
| 0.3688 | 76.0 | 5092 | 0.1676 | 0.1126 | 0.0312 |
| 0.3522 | 77.0 | 5159 | 0.1645 | 0.1097 | 0.0305 |
| 0.3235 | 78.0 | 5226 | 0.1606 | 0.1094 | 0.0300 |
| 0.3235 | 79.0 | 5293 | 0.1615 | 0.1061 | 0.0302 |
| 0.3303 | 80.0 | 5360 | 0.1585 | 0.1041 | 0.0295 |
| 0.319 | 81.0 | 5427 | 0.1567 | 0.1061 | 0.0291 |
| 0.319 | 82.0 | 5494 | 0.1554 | 0.1038 | 0.0294 |
| 0.3257 | 83.0 | 5561 | 0.1557 | 0.1054 | 0.0296 |
| 0.3079 | 84.0 | 5628 | 0.1548 | 0.1005 | 0.0286 |
| 0.3079 | 85.0 | 5695 | 0.1512 | 0.1018 | 0.0289 |
| 0.302 | 86.0 | 5762 | 0.1513 | 0.1001 | 0.0286 |
| 0.2919 | 87.0 | 5829 | 0.1493 | 0.0965 | 0.0281 |
| 0.2919 | 88.0 | 5896 | 0.1479 | 0.0968 | 0.0283 |
| 0.2783 | 89.0 | 5963 | 0.1460 | 0.0968 | 0.0278 |
| 0.2862 | 90.0 | 6030 | 0.1453 | 0.0975 | 0.0280 |
| 0.2862 | 91.0 | 6097 | 0.1466 | 0.0942 | 0.0279 |
| 0.276 | 92.0 | 6164 | 0.1458 | 0.0958 | 0.0275 |
| 0.2854 | 93.0 | 6231 | 0.1462 | 0.0945 | 0.0273 |
| 0.2854 | 94.0 | 6298 | 0.1446 | 0.0935 | 0.0275 |
| 0.2887 | 95.0 | 6365 | 0.1445 | 0.0935 | 0.0275 |
| 0.2907 | 96.0 | 6432 | 0.1435 | 0.0935 | 0.0274 |
| 0.2907 | 97.0 | 6499 | 0.1450 | 0.0949 | 0.0272 |
| 0.298 | 98.0 | 6566 | 0.1454 | 0.0935 | 0.0272 |
| 0.2974 | 99.0 | 6633 | 0.1450 | 0.0939 | 0.0273 |
| 0.304 | 100.0 | 6700 | 0.1447 | 0.0935 | 0.0271 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
Paluzka/lunar_lander | Paluzka | 2024-02-27T07:53:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T07:52:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 245.65 +/- 24.38
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename below is an assumption; check this repository's *Files* tab for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub, then load it with SB3.
checkpoint = load_from_hub("Paluzka/lunar_lander", "ppo-LunarLander-v2.zip")  # filename is an assumption
model = PPO.load(checkpoint)
```
|
casque/DAY_AND_NIGHT | casque | 2024-02-27T07:50:52Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-02-27T07:50:07Z | ---
license: creativeml-openrail-m
---
|
smutuvi/whisper-small-sw-common-voice-ndizi-782 | smutuvi | 2024-02-27T07:45:44Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:smutuvi/whisper-small-sw-common-voice",
"base_model:adapter:smutuvi/whisper-small-sw-common-voice",
"license:apache-2.0",
"region:us"
] | null | 2024-02-27T07:45:42Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: smutuvi/whisper-small-sw-common-voice
model-index:
- name: whisper-small-sw-common-voice-ndizi-782
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-sw-common-voice-ndizi-782
This model is a fine-tuned version of [smutuvi/whisper-small-sw-common-voice](https://huggingface.co/smutuvi/whisper-small-sw-common-voice) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
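A minimal inference sketch, assuming the usual PEFT workflow of loading the base Whisper checkpoint and attaching this adapter (the base repo is assumed to ship processor files, and the audio file name is a placeholder):

```python
import librosa
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "smutuvi/whisper-small-sw-common-voice"
adapter_id = "smutuvi/whisper-small-sw-common-voice-ndizi-782"

# Load the base model and processor, then attach the PEFT adapter weights.
processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)

# "sample.wav" is a placeholder for a 16 kHz Swahili audio clip.
speech, sr = librosa.load("sample.wav", sr=16000)
features = processor(speech, sampling_rate=sr, return_tensors="pt")
ids = model.generate(input_features=features.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```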
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0393 | 1.0 | 51 | 2.5562 |
| 1.9426 | 2.0 | 102 | 2.4939 |
| 1.9615 | 3.0 | 153 | 2.4289 |
| 1.7863 | 4.0 | 204 | 2.3723 |
| 1.8188 | 5.0 | 255 | 2.3275 |
| 1.707 | 6.0 | 306 | 2.2926 |
| 1.8352 | 7.0 | 357 | 2.2644 |
| 1.6285 | 8.0 | 408 | 2.2394 |
| 1.6887 | 9.0 | 459 | 2.2177 |
| 1.6399 | 10.0 | 510 | 2.1987 |
| 1.6828 | 11.0 | 561 | 2.1825 |
| 1.5906 | 12.0 | 612 | 2.1671 |
| 1.6641 | 13.0 | 663 | 2.1534 |
| 1.6976 | 14.0 | 714 | 2.1405 |
| 1.4269 | 15.0 | 765 | 2.1295 |
| 1.4999 | 16.0 | 816 | 2.1192 |
| 1.5153 | 17.0 | 867 | 2.1091 |
| 1.5006 | 18.0 | 918 | 2.0997 |
| 1.5664 | 19.0 | 969 | 2.0915 |
| 1.4385 | 20.0 | 1020 | 2.0833 |
| 1.4577 | 21.0 | 1071 | 2.0754 |
| 1.3832 | 22.0 | 1122 | 2.0683 |
| 1.4214 | 23.0 | 1173 | 2.0608 |
| 1.3889 | 24.0 | 1224 | 2.0546 |
| 1.3982 | 25.0 | 1275 | 2.0480 |
| 1.4601 | 26.0 | 1326 | 2.0428 |
| 1.4422 | 27.0 | 1377 | 2.0364 |
| 1.3789 | 28.0 | 1428 | 2.0322 |
| 1.3603 | 29.0 | 1479 | 2.0266 |
| 1.5143 | 30.0 | 1530 | 2.0224 |
| 1.3834 | 31.0 | 1581 | 2.0186 |
| 1.338 | 32.0 | 1632 | 2.0134 |
| 1.2825 | 33.0 | 1683 | 2.0110 |
| 1.4324 | 34.0 | 1734 | 2.0066 |
| 1.3716 | 35.0 | 1785 | 2.0032 |
| 1.3713 | 36.0 | 1836 | 2.0000 |
| 1.419 | 37.0 | 1887 | 1.9971 |
| 1.3179 | 38.0 | 1938 | 1.9940 |
| 1.3147 | 39.0 | 1989 | 1.9914 |
| 1.4404 | 40.0 | 2040 | 1.9887 |
| 1.2711 | 41.0 | 2091 | 1.9869 |
| 1.2966 | 42.0 | 2142 | 1.9835 |
| 1.2696 | 43.0 | 2193 | 1.9814 |
| 1.3286 | 44.0 | 2244 | 1.9790 |
| 1.271 | 45.0 | 2295 | 1.9766 |
| 1.2839 | 46.0 | 2346 | 1.9746 |
| 1.3299 | 47.0 | 2397 | 1.9734 |
| 1.2858 | 48.0 | 2448 | 1.9711 |
| 1.2222 | 49.0 | 2499 | 1.9697 |
| 1.3 | 50.0 | 2550 | 1.9684 |
| 1.305 | 51.0 | 2601 | 1.9664 |
| 1.3894 | 52.0 | 2652 | 1.9644 |
| 1.2221 | 53.0 | 2703 | 1.9635 |
| 1.2858 | 54.0 | 2754 | 1.9632 |
| 1.2739 | 55.0 | 2805 | 1.9615 |
| 1.1671 | 56.0 | 2856 | 1.9606 |
| 1.1928 | 57.0 | 2907 | 1.9590 |
| 1.2995 | 58.0 | 2958 | 1.9576 |
| 1.2287 | 59.0 | 3009 | 1.9572 |
| 1.191 | 60.0 | 3060 | 1.9559 |
| 1.2061 | 61.0 | 3111 | 1.9551 |
| 1.1362 | 62.0 | 3162 | 1.9546 |
| 1.155 | 63.0 | 3213 | 1.9537 |
| 1.2597 | 64.0 | 3264 | 1.9528 |
| 1.267 | 65.0 | 3315 | 1.9519 |
| 1.1355 | 66.0 | 3366 | 1.9503 |
| 1.2612 | 67.0 | 3417 | 1.9500 |
| 1.2262 | 68.0 | 3468 | 1.9494 |
| 1.2408 | 69.0 | 3519 | 1.9484 |
| 1.2042 | 70.0 | 3570 | 1.9481 |
| 1.1686 | 71.0 | 3621 | 1.9479 |
| 1.2891 | 72.0 | 3672 | 1.9474 |
| 1.3271 | 73.0 | 3723 | 1.9471 |
| 1.3009 | 74.0 | 3774 | 1.9464 |
| 1.1288 | 75.0 | 3825 | 1.9460 |
| 1.2342 | 76.0 | 3876 | 1.9456 |
| 1.2581 | 77.0 | 3927 | 1.9450 |
| 1.0447 | 78.0 | 3978 | 1.9448 |
| 1.2492 | 79.0 | 4029 | 1.9449 |
| 1.2316 | 80.0 | 4080 | 1.9443 |
| 1.0901 | 81.0 | 4131 | 1.9442 |
| 1.1115 | 82.0 | 4182 | 1.9444 |
| 1.1942 | 83.0 | 4233 | 1.9435 |
| 1.1974 | 84.0 | 4284 | 1.9434 |
| 1.2287 | 85.0 | 4335 | 1.9431 |
| 1.1252 | 86.0 | 4386 | 1.9429 |
| 1.0746 | 87.0 | 4437 | 1.9431 |
| 1.1975 | 88.0 | 4488 | 1.9432 |
| 1.2231 | 89.0 | 4539 | 1.9426 |
| 1.1957 | 90.0 | 4590 | 1.9426 |
| 1.1388 | 91.0 | 4641 | 1.9428 |
| 1.198 | 92.0 | 4692 | 1.9426 |
| 1.1479 | 93.0 | 4743 | 1.9423 |
| 1.1635 | 94.0 | 4794 | 1.9423 |
| 1.1184 | 95.0 | 4845 | 1.9422 |
| 1.1971 | 96.0 | 4896 | 1.9421 |
| 1.1907 | 97.0 | 4947 | 1.9418 |
| 1.0373 | 98.0 | 4998 | 1.9419 |
| 1.1927 | 99.0 | 5049 | 1.9421 |
| 1.1475 | 100.0 | 5100 | 1.9420 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0 |
jenilsavaj/my-pet-cat-xzg | jenilsavaj | 2024-02-27T07:44:09Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-27T07:40:18Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-XZG Dreambooth model trained by jenilsavaj following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 23020203020042
Sample pictures of this concept:
.png)
.png)
.png)
.png)
.png)
|
rishabhjain16/whisper-small_to_kaggle_albanian | rishabhjain16 | 2024-02-27T07:43:10Z | 78 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sq",
"dataset:Kaggle",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-26T22:58:58Z | ---
language:
- sq
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- Kaggle
metrics:
- wer
model-index:
- name: Whisper tiny Albanian Test
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Kaggle Albanian
type: Kaggle
args: 'config: sq, split: test'
metrics:
- name: Wer
type: wer
value: 35.183525075949994
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny Albanian Test
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Kaggle Albanian dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4063
- Wer: 35.1835
## Model description
More information needed
## Intended uses & limitations
More information needed
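A minimal inference sketch, assuming standard Whisper usage through the ASR pipeline (the audio file name is a placeholder):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rishabhjain16/whisper-small_to_kaggle_albanian",
)
# "sample.wav" is a placeholder for a 16 kHz Albanian audio clip.
print(asr("sample.wav")["text"])
```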
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5598 | 0.24 | 500 | 0.6847 | 55.6153 |
| 0.4449 | 0.49 | 1000 | 0.5154 | 44.8005 |
| 0.3986 | 0.73 | 1500 | 0.4648 | 40.5448 |
| 0.3684 | 0.97 | 2000 | 0.4242 | 37.3848 |
| 0.2517 | 1.22 | 2500 | 0.4143 | 35.9306 |
| 0.2467 | 1.46 | 3000 | 0.4063 | 35.1835 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
amitjain171980/xlm-roberta-base-finetuned-panx-it | amitjain171980 | 2024-02-27T07:36:09Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-27T06:52:01Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8146938775510203
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2557
- F1: 0.8147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8205 | 1.0 | 70 | 0.3560 | 0.6954 |
| 0.2796 | 2.0 | 140 | 0.2721 | 0.7929 |
| 0.181 | 3.0 | 210 | 0.2557 | 0.8147 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.2
|
amitjain171980/xlm-roberta-base-finetuned-panx-fr | amitjain171980 | 2024-02-27T07:32:57Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-27T06:47:27Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8481992595085829
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2703
- F1: 0.8482
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5798 | 1.0 | 191 | 0.3184 | 0.7987 |
| 0.2651 | 2.0 | 382 | 0.2918 | 0.8259 |
| 0.1798 | 3.0 | 573 | 0.2703 | 0.8482 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.2
|
gsstein/model-0-percent-human-opt-og | gsstein | 2024-02-27T07:29:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T07:29:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
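A minimal, generic sketch, assuming the repository holds a standard 🤗 Transformers checkpoint together with its tokenizer; the appropriate task-specific `Auto*` class depends on the (undocumented) architecture:

```python
from transformers import AutoModel, AutoTokenizer

repo_id = "gsstein/model-0-percent-human-opt-og"
tokenizer = AutoTokenizer.from_pretrained(repo_id)  # assumes tokenizer files are present in the repo
model = AutoModel.from_pretrained(repo_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)  # raw hidden states; swap in a task-specific Auto* class as needed
```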
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amitjain171980/xlm-roberta-base-finetuned-panx-de-fr | amitjain171980 | 2024-02-27T07:23:41Z | 116 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-27T06:30:26Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2894 | 1.0 | 715 | 0.1794 | 0.8253 |
| 0.1468 | 2.0 | 1430 | 0.1554 | 0.8477 |
| 0.093 | 3.0 | 2145 | 0.1639 | 0.8590 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.2
|
andythetechnerd03/BERT-tiny_AI-Human | andythetechnerd03 | 2024-02-27T07:21:36Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"code",
"en",
"dataset:andythetechnerd03/AI-human-text",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-21T06:00:19Z | ---
language:
- en
license: mit
tags:
- code
datasets:
- andythetechnerd03/AI-human-text
metrics:
- accuracy
- f1
- recall
- precision
- roc_auc
pipeline_tag: text-classification
---
|
vgarg/promo_prescriptive_harshal_trail_27_02_2024 | vgarg | 2024-02-27T07:15:50Z | 4 | 0 | setfit | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"model-index",
"region:us"
] | text-classification | 2024-02-27T02:07:33Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: How does cannibalization within the RTEC category compare to other product
categories within the MT channel, influencing the overall volumelift?
- text: Can you identify the specific factors or challenges that contributed to the
decline in ROI within TT in 2022 compared to 2021?
- text: Which Sku cannibalizes higher margin Skus the most for CHEDRAUI channel_name?
- text: Can you compare the overall market share and competitive landscape of the
category more sensitive to internal cannibalization with other categories?
- text: Can you identify the key factors or challenges that have contributed to the
ROI decline within TT
pipeline_tag: text-classification
inference: true
base_model: intfloat/multilingual-e5-large
model-index:
- name: SetFit with intfloat/multilingual-e5-large
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9130434782608695
name: Accuracy
---
# SetFit with intfloat/multilingual-e5-large
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'Are there particular factors or trends contributing to the high level of cannibalization for certain brands in the SS category?'</li><li>'How does the degree of cannibalization vary among different SKUs in the RTEC ?'</li><li>'Which Sku cannibalizes higher margin Skus the most?'</li></ul> |
| 1 | <ul><li>'Are there plans to enhance promotional activities specific to the MT to mitigate the ROI decline in 2023?'</li><li>'What are the main reasons for ROI decline in 2022 in MT compared to 2021?'</li><li>'Are there changes in consumer preferences or trends that have impacted the Lift of Zucaritas, and how does this compare to other brands like Pringles or Frutela?'</li></ul> |
| 0 | <ul><li>'What type of promotions worked best for MT Walmart in 2022?'</li><li>'Which channel has the max ROI and Vol Lift when we run the Promotion for RTEC category?'</li><li>'Which sub_catg_nm have the highest ROI in 2022?'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9130 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vgarg/promo_prescriptive_harshal_trail_27_02_2024")
# Run inference
preds = model("Which Sku cannibalizes higher margin Skus the most for CHEDRAUI channel_name?")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 7 | 15.8333 | 30 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 10 |
| 1 | 10 |
| 2 | 10 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (3, 3)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0133 | 1 | 0.3582 | - |
| 0.6667 | 50 | 0.0024 | - |
| 1.3333 | 100 | 0.0005 | - |
| 2.0 | 150 | 0.0004 | - |
| 2.6667 | 200 | 0.0002 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.4.0
- Transformers: 4.37.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.17.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
sujith013/whisper-medium-tamil-VO | sujith013 | 2024-02-27T07:12:39Z | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ta",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-24T09:52:14Z | ---
language:
- ta
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: ./whisper-medium-tamil-VO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ./whisper-medium-tamil-VO
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1651
- Wer: 36.3852
## Model description
More information needed
## Intended uses & limitations
More information needed
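A minimal inference sketch using the processor and model classes directly (loading the clip with librosa at 16 kHz is an assumption, the file name is a placeholder, and the repository is assumed to include processor files):

```python
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

repo_id = "sujith013/whisper-medium-tamil-VO"
processor = WhisperProcessor.from_pretrained(repo_id)
model = WhisperForConditionalGeneration.from_pretrained(repo_id)

# "tamil_clip.wav" is a placeholder for your own audio file.
speech, sr = librosa.load("tamil_clip.wav", sr=16000)
features = processor(speech, sampling_rate=sr, return_tensors="pt")
ids = model.generate(features.input_features)
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```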
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4805 | 0.15 | 25 | 0.2645 | 48.9687 |
| 0.2224 | 0.31 | 50 | 0.2017 | 41.7273 |
| 0.1776 | 0.46 | 75 | 0.1748 | 38.4330 |
| 0.1595 | 0.62 | 100 | 0.1651 | 36.3852 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
shivanikerai/Llama-2-7b-chat-hf-adapter-sku-title-ner-generation-reversed-v2.0 | shivanikerai | 2024-02-27T07:11:41Z | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-02-27T07:10:31Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
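In the absence of author-provided code, here is a minimal hedged sketch for loading this adapter, assuming it is a standard PEFT adapter for `meta-llama/Llama-2-7b-chat-hf` as the metadata indicates (the example prompt is purely illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "shivanikerai/Llama-2-7b-chat-hf-adapter-sku-title-ner-generation-reversed-v2.0"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt only; the adapter's expected input format is not documented in this card.
inputs = tokenizer("Generate a product title for: ", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```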
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
jojo80565/ppo-SnowballTarget | jojo80565 | 2024-02-27T07:07:22Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-02-27T07:07:18Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jojo80565/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
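To run the agent locally instead, the Hub helper used in the Deep RL course can download this checkpoint; a hedged sketch (it assumes your ML-Agents install ships `mlagents-load-from-hf`, and the local directory name is arbitrary):

```bash
# Download this trained model from the Hub, then resume or evaluate it locally.
mlagents-load-from-hf --repo-id="jojo80565/ppo-SnowballTarget" --local-dir="./downloads"
```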
|
amitjain171980/xlm-roberta-base-finetuned-panx-de | amitjain171980 | 2024-02-27T07:05:11Z | 126 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-26T11:44:58Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8625641025641025
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1350
- F1: 0.8626
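A hedged inference sketch (not from the card author; it assumes the checkpoint exposes the usual PAN-X entity labels and works with the token-classification pipeline):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="amitjain171980/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
# German example sentence; entities should come back grouped as PER/ORG/LOC spans.
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```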
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2585 | 1.0 | 525 | 0.1580 | 0.8255 |
| 0.1282 | 2.0 | 1050 | 0.1381 | 0.8447 |
| 0.0805 | 3.0 | 1575 | 0.1350 | 0.8626 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.2
|
sofiapaklina/grdmr_test_zoo4 | sofiapaklina | 2024-02-27T07:02:22Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2024-02-26T10:31:05Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.5.0
|
bikesandrobots/ppo-LunarLander-v2.1 | bikesandrobots | 2024-02-27T07:02:20Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T07:02:02Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 280.13 +/- 16.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption, not confirmed by the repo listing.
checkpoint = load_from_hub("bikesandrobots/ppo-LunarLander-v2.1", "ppo-LunarLander-v2.1.zip")
model = PPO.load(checkpoint)
```
|
charanhu/gemma-Code-Instruct-Finetune-test | charanhu | 2024-02-27T07:00:20Z | 112 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T06:56:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Isaachhe/phi-2_dev | Isaachhe | 2024-02-27T06:59:19Z | 111 | 0 | transformers | [
"transformers",
"pytorch",
"phi",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-10T15:27:09Z | Copied from https://huggingface.co/susnato/phi-2 commit@9070ddb4fce238899ddbd2aca1faf6a0aeb6e444.
This model can be loaded using HuggingFace `transformers` [commit@4ab5fb8941a38d172b3883c152c34ae2a0b83a68](https://github.com/huggingface/transformers/tree/4ab5fb8941a38d172b3883c152c34ae2a0b83a68).
Below is the original introduction, which may be outdated by now.
----------------------------------------------------
**DISCLAIMER**: I don't own the weights to this model, this is a property of Microsoft and taken from their official repository : [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).
The sole purpose of this repository is to use this model through the `transformers` API or to load and use the model using the HuggingFace `transformers` library.
# Usage
First make sure you have the latest version of the `transformers` installed.
```
pip install -U transformers
```
Then use the transformers library to load the model from the library itself
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("susnato/phi-2")
tokenizer = AutoTokenizer.from_pretrained("susnato/phi-2")
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
``` |
ryusangwon/5777_Llama-2-7b-hf | ryusangwon | 2024-02-27T06:59:01Z | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:samsum",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-27T06:58:55Z | ---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: 5777_Llama-2-7b-hf
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 5777_Llama-2-7b-hf
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the samsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
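A hedged sketch of recreating that setup at inference time, i.e. loading the base model in 8-bit and attaching this adapter (an illustration only, not the author's script):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirror the 8-bit quantization config listed above.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "ryusangwon/5777_Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```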
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.4.0
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
nagyadam0616/zephyr-x-twitter-5epocs | nagyadam0616 | 2024-02-27T06:47:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T06:42:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lgodwangl/test10 | lgodwangl | 2024-02-27T06:41:13Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T19:57:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nagyadam0616/zephyr-x-twitter-8epocs | nagyadam0616 | 2024-02-27T06:41:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T06:40:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stablediffusionapi/anything-inkbase | stablediffusionapi | 2024-02-27T06:39:26Z | 30 | 2 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-27T06:37:21Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Anything Ink_base 万象熔炉 API Inference

## Get API Key
Get your API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below and change **model_id** to "anything-inkbase".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/anything-inkbase)
Model link: [View model](https://modelslab.com/models/anything-inkbase)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "anything-inkbase",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
doubledsbv/KafkaLM-7B-DARE_TIES-LaserRMT-QLoRA-DPO-v0.5-AWQ | doubledsbv | 2024-02-27T06:30:30Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama2",
"deutsch",
"german",
"seedbox",
"conversational",
"de",
"en",
"dataset:seedboxai/multitask_german_examples_32k",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-02-19T19:03:42Z | ---
library_name: transformers
tags:
- llama2
- deutsch
- german
- seedbox
license: llama2
datasets:
- seedboxai/multitask_german_examples_32k
language:
- de
- en
pipeline_tag: text-generation
---

# KafkaLM-7B-DARE_TIES-LaserRMT-QLoRA-DPO-v0.5
**KafkaLM 7b** builds on [leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b), a Mistral 7b model further pre-trained on a large German dataset from Björn Plüster and LAION, and was finetuned on an ensemble of popular high-quality open-source instruction sets (translated from English to German).
KafkaLM 7b is a [Seedbox](https://huggingface.co/seedboxai) project trained by [Dennis Dickmann](https://huggingface.co/doubledsbv).
**Why Kafka?**
The models are proficient, yet creative, and have some tendencies to linguistically push boundaries 😊
## THE MODEL CAN BE TESTED HERE: [Kafka-7B HF Space](https://huggingface.co/spaces/doubledsbv/Kafka-7B-DARE-TIES-QLoRa-LaserRMT-DPO)
## Model Details
The purpose of releasing the **KafkaLM series** is to contribute to the German AI community with a set of fine-tuned LLMs that are easy to use in everyday applications across a variety of tasks.
The main goal was to provide LLMs proficient in German, especially to be used in German-speaking business contexts where English alone is not sufficient.
## DPO Training with laserRMT w/ Q-Lora
Based on the brilliant work of the [laserRMT](https://github.com/cognitivecomputations/laserRMT/) team, I used the SNR implementation to identify candidate layers to be used for the DPO training.
### Dataset
I used an 8k filtered version of the following dataset: [seedboxai/multitask_german_examples_32k](https://huggingface.co/datasets/seedboxai/multitask_german_examples_32k)
### Prompt Format
This model follows the subsequent prompt format:
```
<|system|>
Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert und präzise, ohne dabei relevante Fakten auszulassen.</s>
<|user|>
Welche Möglichkeiten der energetischen Sanierung habe ich neben Solar und Energiespeicher?</s>
<|assistant|>
```
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# no parameters necessary for base model
- model: seedboxai/KafkaLM-7B-German-V0.1
parameters:
density: 0.65
weight: 0.50
- model: mlabonne/Monarch-7B
parameters:
density: 0.60
weight: 0.30
- model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
parameters:
density: 0.60
weight: 0.20
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
```
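For reference, a hedged sketch of how such a config is typically run with mergekit (the config filename and output path are placeholders, not taken from this repo):

```bash
# Execute the DARE-TIES merge described above; requires the mergekit package.
mergekit-yaml kafkalm_dare_ties.yaml ./KafkaLM-7B-DARE_TIES --cuda
```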
## 💻 Usage (fast vLLM inference example)
```python
!pip install -qU vllm

import torch
from vllm import LLM, SamplingParams

sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.95,
    top_k=50,
    max_tokens=512,
)

llm = LLM(model="doubledsbv/KafkaLM-7B-DARE_TIES-DPO-v0.5-AWQ", quantization="awq", dtype=torch.float16)

def generate_prompt(input, sys_prompt=None):
    # Build the prompt format described above.
    prompt = ''
    if not sys_prompt:
        sys_prompt = "Du bist ein freundlicher und hilfsbereiter KI-Assistent. Du beantwortest Fragen faktenorientiert, präzise und ausführlich."
    prompt += f"<|system|>\n{sys_prompt.strip()}</s>\n"
    prompt += f"<|user|>\n{input.strip()}</s>\n"
    prompt += f"<|assistant|>\n"
    return prompt

outputs = llm.generate(generate_prompt("Was ist der Unterschied zwischen Ironie und Sarkasmus?"), sampling_params)
print(outputs[0].outputs[0].text.strip())
```
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license and all restrictions of datasets used to train this model apply. |
rootxhacker/red-llama2-qlora | rootxhacker | 2024-02-27T06:27:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T06:27:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yhzhang3/detect-gpt | yhzhang3 | 2024-02-27T06:23:08Z | 169 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T06:22:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bikesandrobots/ppo-LunarLander-v2 | bikesandrobots | 2024-02-27T06:15:25Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T06:15:07Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.15 +/- 18.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the repo files for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption, not confirmed by the repo listing.
checkpoint = load_from_hub("bikesandrobots/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Arpan908/Arpan | Arpan908 | 2024-02-27T05:57:14Z | 0 | 0 | null | [
"finance",
"summarization",
"ae",
"dataset:teknium/OpenHermes-2.5",
"arxiv:1910.09700",
"region:us"
] | summarization | 2024-02-27T05:52:56Z | ---
datasets:
- teknium/OpenHermes-2.5
language:
- ae
metrics:
- bertscore
pipeline_tag: summarization
tags:
- finance
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Annuu/my-pet-dog | Annuu | 2024-02-27T05:42:50Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-02-27T05:38:16Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by Annuu following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
FINNUMBER/Yi-Ko-6B-Finch-SA-FULL-Hyper-epoch3 | FINNUMBER | 2024-02-27T05:42:46Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T17:33:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
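This section is empty in the card. Because the repository tags indicate a LLaMA-architecture text-generation model, the following is a generic, hedged loading sketch rather than an official example; the prompt format expected by this fine-tune is not documented here.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FINNUMBER/Yi-Ko-6B-Finch-SA-FULL-Hyper-epoch3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt only; the expected instruction format is undocumented.
inputs = tokenizer("Summarize the sentiment of the following review:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```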
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
twintransition/ppo-LunarLander-v2 | twintransition | 2024-02-27T05:35:33Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-02-27T05:35:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 251.30 +/- 13.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
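The usage code above is a placeholder; the sketch below shows one way to load this checkpoint and watch the agent locally. The checkpoint filename is an assumption, since the card does not state it.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: the checkpoint uses the conventional SB3 filename.
checkpoint = load_from_hub(
    repo_id="twintransition/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Render a few episodes (requires gymnasium[box2d]).
env = gym.make("LunarLander-v2", render_mode="human")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```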
|
LinWeizheDragon/PreFLMR_ViT-G | LinWeizheDragon | 2024-02-27T05:34:51Z | 709 | 8 | transformers | [
"transformers",
"safetensors",
"flmr",
"feature-extraction",
"retrieval",
"multi-modal",
"knowledge-based visual question answering",
"FLMR",
"PreFLMR",
"custom_code",
"en",
"arxiv:2402.08327",
"license:mit",
"region:us"
] | feature-extraction | 2024-02-20T02:57:11Z | ---
library_name: transformers
license: mit
language:
- en
tags:
- retrieval
- multi-modal
- knowledge-based visual question answering
- FLMR
- PreFLMR
---
# PreFLMR model card
PreFLMR is an open-source model for multimodal knowledge retrieval. It is a transformer-based model that uses a combination of text and image inputs to retrieve relevant documents from a large corpus.
## Model Details
### Model Description
- **Model type:** FLMRModelForRetrieval
- **Language(s) (NLP):** English
- **License:** MIT License
### Paper and resources for more detail
- **Blog Post for quick overview:** https://www.jinghong-chen.net/preflmr-sota-open-sourced-multi/
- **Paper:** https://arxiv.org/abs/2402.08327
- **Gradio Demo:** https://u60544-b8d4-53eaa55d.westx.seetacloud.com:8443/
- **Repository:** https://github.com/LinWeizheDragon/FLMR
- **Project Page:** https://preflmr.github.io/
## Uses
### Direct Use
This model can be used directly to retrieve documents from a large corpus using a combination of text and image input queries. The retrieval usage can be found in the [official implementation](https://github.com/LinWeizheDragon/FLMR).
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
This model can be combined with language models to create a retrieval-augmented language model. Its use for knowledge-based VQA can be found in [RAVQA](https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering).
## How to Get Started with the Model
For details of training, indexing, and performing retrieval, please refer to [here](https://github.com/LinWeizheDragon/FLMR).
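Before turning to the full documentation, the sketch below adapts the text-only FLMR example shown in the companion ColBERT-v2 card to this multimodal checkpoint. The image-side details are assumptions not confirmed by this card: the `query_pixel_values` argument name and the dummy 224×224 tensor merely stand in for features produced by the vision encoder's image processor, so treat the official repository as authoritative.
```python
import torch
from flmr import FLMRModelForRetrieval, FLMRQueryEncoderTokenizer, FLMRContextEncoderTokenizer

checkpoint_path = "LinWeizheDragon/PreFLMR_ViT-G"
query_tokenizer = FLMRQueryEncoderTokenizer.from_pretrained(checkpoint_path, subfolder="query_tokenizer")
context_tokenizer = FLMRContextEncoderTokenizer.from_pretrained(checkpoint_path, subfolder="context_tokenizer")
model = FLMRModelForRetrieval.from_pretrained(
    checkpoint_path,
    query_tokenizer=query_tokenizer,
    context_tokenizer=context_tokenizer,
)

Q_encoding = query_tokenizer(["Using the provided image, what is this building called?"])
D_encoding = context_tokenizer(["The Eiffel Tower is a wrought-iron lattice tower in Paris.",
                                "The Gherkin is a commercial skyscraper in London."])

# Assumption: dummy pixel values stand in for the image processor's output;
# the argument name and tensor shape are not confirmed by this card.
Q_pixel_values = torch.zeros(1, 3, 224, 224)

inputs = dict(
    query_input_ids=Q_encoding["input_ids"],
    query_attention_mask=Q_encoding["attention_mask"],
    query_pixel_values=Q_pixel_values,
    context_input_ids=D_encoding["input_ids"],
    context_attention_mask=D_encoding["attention_mask"],
    use_in_batch_negatives=True,
)
res = model.forward(**inputs)
```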
## Training datasets
The model is pre-trained on three types of tasks with a total of nine datasets:
1. Image to Text retrieval: WIT, KVQA, and CC3M
2. Question to Text retrieval: MSMARCO
3. Image & Question to Text retrieval: LLaVA, OVEN, OKVQA, Infoseek and E-VQA
These datasets were converted to retrieval format. For details on the dataset split and conversion process, please refer to the paper [PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers](https://arxiv.org/abs/2402.08327). We will release the preprocessed datasets soon.
## Evaluation datasets
We evaluate our models on WIT, LLaVA, OVEN, KVQA, IGLUE (subset of WIT), Infoseek, E-VQA, OKVQA and MSMARCO.
| Model | Vision Encoder | Text Encoder | Checkpoint Name | No. Param. | WIT | LLaVA | OVEN | KVQA | IGLUE | Infoseek | E-VQA | OKVQA | MSMARCO |
|---------|----------------|--------------|-------------------------------------------------------------|-------|-------|--------|-------|-------|-------|----------|-------|--------|-------|
| PreFLMR | ViT-B | Base-v2 | [LinWeizheDragon/PreFLMR_ViT-B](https://huggingface.co/LinWeizheDragon/PreFLMR_ViT-B) | 327M | 41.7 | 67.2 | 46.3 | 28.6 | 57.3 | 48.8 | 67.9 | 66.1 | 79.5 |
| PreFLMR | ViT-L | Base-v2 | [LinWeizheDragon/PreFLMR_ViT-L](https://huggingface.co/LinWeizheDragon/PreFLMR_ViT-L) | 543M | 60.5 | 71.8 | 59.8 | 43.6 | 69.2 | 57.9 | 70.8 | 68.5 | 78.7 |
| PreFLMR | ViT-G | Base-v2 | [LinWeizheDragon/PreFLMR_ViT-G](https://huggingface.co/LinWeizheDragon/PreFLMR_ViT-G) | 2.1B | 61.5 | 72.4 | 63.4 | 42.1 | 71.5 | 59.6 | 73.1 | 68.6 | 78.6 |
For the evaluation metrics, WIT uses Recall@10, IGLUE uses Recall@1, and all remaining datasets use Recall@5.
## Citation
**BibTeX:**
```
@article{Lin_Mei_Chen_Byrne_2024,
title={PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers},
url={http://arxiv.org/abs/2402.08327},
number={arXiv:2402.08327},
publisher={arXiv},
author={Lin, Weizhe and Mei, Jingbiao and Chen, Jinghong and Byrne, Bill},
year={2024}}
``` |
LinWeizheDragon/FLMR | LinWeizheDragon | 2024-02-27T05:33:20Z | 244 | 1 | transformers | [
"transformers",
"safetensors",
"flmr",
"feature-extraction",
"retrieval",
"multi-modal",
"knowledge-based visual question answering",
"FLMR",
"PreFLMR",
"custom_code",
"en",
"license:mit",
"region:us"
] | feature-extraction | 2024-02-20T03:05:50Z | ---
library_name: transformers
license: mit
language:
- en
tags:
- retrieval
- multi-modal
- knowledge-based visual question answering
- FLMR
- PreFLMR
---
# FLMR model card
FLMR is an open-source model for multimodal knowledge retrieval. It is a transformer-based model that uses a combination of text and image inputs to retrieve relevant documents from a large corpus.
## Model Details
### Model Description
- **Model type:** FLMRModelForRetrieval
- **Language(s) (NLP):** English
- **License:** MIT License
### Paper and resources for more detail
- **Blog Post for quick overview:** https://www.jinghong-chen.net/fined-grained-late-interaction-multimodal-retrieval-flmr/
- **Paper:** https://openreview.net/forum?id=IWWWulAX7g
- **Repository:** https://github.com/LinWeizheDragon/FLMR
## Uses
### Direct Use
This model can be used directly to retrieve documents from a large corpus using a combination of text and image input queries. The retrieval usage can be found in the [official implementation](https://github.com/LinWeizheDragon/FLMR).
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
This model can be combined with language models to create a retrieval-augmented language model. Its use for knowledge-based VQA can be found in [RAVQA](https://github.com/linweizhedragon/retrieval-augmented-visual-question-answering).
## How to Get Started with the Model
For details of training, indexing, and performing retrieval, please refer to [here](https://github.com/LinWeizheDragon/FLMR).
## Training datasets
The model is pre-trained on:
1. Image to Text retrieval: WIT
2. Image & Question to Text retrieval: OKVQA
For details on the dataset split and conversion process, please refer to the paper [Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering](https://openreview.net/forum?id=IWWWulAX7g).
The processed datasets are:
- https://huggingface.co/datasets/BByrneLab/OKVQA_FLMR_preprocessed_data
- https://huggingface.co/datasets/BByrneLab/OKVQA_FLMR_preprocessed_GoogleSearch_passages
## Evaluation datasets
The model is evaluated on OKVQA, Infoseek, and FVQA.
Please find the evaluation results in [the paper](https://openreview.net/forum?id=IWWWulAX7g).
## Citation
**BibTeX:**
```
@inproceedings{
lin2023finegrained,
title={Fine-grained Late-interaction Multi-modal Retrieval for Retrieval Augmented Visual Question Answering},
author={Weizhe Lin and Jinghong Chen and Jingbiao Mei and Alexandru Coca and Bill Byrne},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems},
year={2023},
url={https://openreview.net/forum?id=IWWWulAX7g}
}
``` |
Ubaidbhat/Finance | Ubaidbhat | 2024-02-27T05:32:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T05:32:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Prateekjain24/autotrain-fco56-qnzow | Prateekjain24 | 2024-02-27T05:28:04Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-02-27T05:28:01Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Propertyguru marketing banner
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
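No usage snippet ships with this card. The sketch below assumes the repository stores DreamBooth LoRA weights to be applied on top of the SDXL base model named above (the usual AutoTrain DreamBooth output) rather than a full pipeline; the prompt reuses the instance prompt from the card metadata.
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Assumption: this repo contains SDXL-compatible DreamBooth LoRA weights.
pipe.load_lora_weights("Prateekjain24/autotrain-fco56-qnzow")

# Instance prompt taken from the card metadata above.
image = pipe(prompt="Propertyguru marketing banner").images[0]
image.save("banner.png")
```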
|
LinWeizheDragon/ColBERT-v2 | LinWeizheDragon | 2024-02-27T05:21:41Z | 131 | 1 | transformers | [
"transformers",
"safetensors",
"flmr",
"ColBERT",
"Passage Retrieval",
"en",
"arxiv:2112.01488",
"arxiv:2104.08663",
"arxiv:2402.08327",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T04:12:09Z | ---
license: mit
language:
- en
tags:
- ColBERT
- Passage Retrieval
---
# Model Card for ColBERT
<!-- Provide a quick summary of what the model is/does. -->
`ColBERT-v2` is a passage retrieval model that leverages late interaction to allow token-level interaction between query and document embeddings. It outperforms traditional retrievers (such as Dense Passage Retrieval) that encode queries and documents into single fixed-length vectors.
With the approval of the original author, this model is re-implemented with the huggingface-compatible implementation of [PreFLMR](https://preflmr.github.io/). All the weights are imported from the [official ColBERTv2 checkpoint](https://github.com/stanford-futuredata/ColBERT?tab=readme-ov-file).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, Matei Zaharia
- **Re-implemented by:** Weizhe Lin, Howard Mei, Jinghong Chen, Bill Byrne
- **Model type:** FLMRModelForRetrieval
- **Language(s) (NLP):** English
- **License:** MIT License
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/stanford-futuredata/ColBERT
- **Repository of implementation:** https://github.com/LinWeizheDragon/FLMR
- **Paper:** https://arxiv.org/abs/2112.01488
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model can be used directly to retrieve documents from a large corpus using text queries. The retrieval usage can be found in the [PreFLMR implementation](https://github.com/LinWeizheDragon/FLMR).
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
This model can be combined with language models to create a retrieval-augmented language model.
## How to Get Started with the Model
For details of training, indexing, and performing retrieval, please refer to [here](https://github.com/LinWeizheDragon/FLMR).
1. Install the [FLMR package](https://github.com/LinWeizheDragon/FLMR).
2. A simple example of using this model:
```python
from flmr import FLMRConfig, FLMRModelForRetrieval, FLMRQueryEncoderTokenizer, FLMRContextEncoderTokenizer
checkpoint_path = "LinWeizheDragon/ColBERT-v2"
query_tokenizer = FLMRQueryEncoderTokenizer.from_pretrained(checkpoint_path, subfolder="query_tokenizer")
context_tokenizer = FLMRContextEncoderTokenizer.from_pretrained(checkpoint_path, subfolder="context_tokenizer")
model = FLMRModelForRetrieval.from_pretrained(checkpoint_path,
query_tokenizer=query_tokenizer,
context_tokenizer=context_tokenizer,
)
Q_encoding = query_tokenizer(["What is the capital of France?", "What is the capital of China?"])
D_encoding = context_tokenizer(["Paris is the capital of France.", "Beijing is the capital of China.",
"Paris is the capital of France.", "Beijing is the capital of China."])
inputs = dict(
query_input_ids=Q_encoding['input_ids'],
query_attention_mask=Q_encoding['attention_mask'],
context_input_ids=D_encoding['input_ids'],
context_attention_mask=D_encoding['attention_mask'],
use_in_batch_negatives=True,
)
res = model.forward(**inputs)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained on [MS MARCO Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking)
More details can be found in the [official ColBERT repository](https://github.com/stanford-futuredata/ColBERT?tab=readme-ov-file)
### Training Procedure
The training details can be found in the [official ColBERT repository](https://github.com/stanford-futuredata/ColBERT?tab=readme-ov-file) and the [ColBERTv2 paper](https://arxiv.org/abs/2112.01488)
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
The model is evaluated on a range of retrieval tasks, such as:
- [MS MARCO Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking)
- [BEIR](https://arxiv.org/abs/2104.08663)
- [LoTTE](https://github.com/stanford-futuredata/ColBERT/blob/main/LoTTE.md)
- [OOD Wikipedia Open QA](https://arxiv.org/abs/2112.01488)
### Results
Find the evaluation results in the [ColBERTv2 paper](https://arxiv.org/abs/2112.01488)
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
Please cite the original papers:
```
@inproceedings{10.1145/3397271.3401075,
author = {Khattab, Omar and Zaharia, Matei},
title = {ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT},
year = {2020},
isbn = {9781450380164},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3397271.3401075},
doi = {10.1145/3397271.3401075},
abstract = {Recent progress in Natural Language Understanding (NLU) is driving fast-paced advances in Information Retrieval (IR), largely owed to fine-tuning deep language models (LMs) for document ranking. While remarkably effective, the ranking models based on these LMs increase computational cost by orders of magnitude over prior approaches, particularly as they must feed each query-document pair through a massive neural network to compute a single relevance score. To tackle this, we present ColBERT, a novel ranking model that adapts deep LMs (in particular, BERT) for efficient retrieval. ColBERT introduces a late interaction architecture that independently encodes the query and the document using BERT and then employs a cheap yet powerful interaction step that models their fine-grained similarity. By delaying and yet retaining this fine-granular interaction, ColBERT can leverage the expressiveness of deep LMs while simultaneously gaining the ability to pre-compute document representations offline, considerably speeding up query processing. Crucially, ColBERT's pruning-friendly interaction mechanism enables leveraging vector-similarity indexes for end-to-end retrieval directly from millions of documents. We extensively evaluate ColBERT using two recent passage search datasets. Results show that ColBERT's effectiveness is competitive with existing BERT-based models (and outperforms every non-BERT baseline), while executing two orders-of-magnitude faster and requiring up to four orders-of-magnitude fewer FLOPs per query.},
booktitle = {Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval},
pages = {39–48},
numpages = {10},
keywords = {bert, deep language models, efficiency, neural ir},
location = {Virtual Event, China},
series = {SIGIR '20}
}
@inproceedings{santhanam-etal-2022-colbertv2,
title = "{C}ol{BERT}v2: Effective and Efficient Retrieval via Lightweight Late Interaction",
author = "Santhanam, Keshav and
Khattab, Omar and
Saad-Falcon, Jon and
Potts, Christopher and
Zaharia, Matei",
editor = "Carpuat, Marine and
de Marneffe, Marie-Catherine and
Meza Ruiz, Ivan Vladimir",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.272",
doi = "10.18653/v1/2022.naacl-main.272",
pages = "3715--3734",
abstract = "Neural information retrieval (IR) has greatly advanced search and other knowledge-intensive language tasks. While many neural IR methods encode queries and documents into single-vector representations, late interaction models produce multi-vector representations at the granularity of each token and decompose relevance modeling into scalable token-level computations. This decomposition has been shown to make late interaction more effective, but it inflates the space footprint of these models by an order of magnitude. In this work, we introduce ColBERTv2, a retriever that couples an aggressive residual compression mechanism with a denoised supervision strategy to simultaneously improve the quality and space footprint of late interaction. We evaluate ColBERTv2 across a wide range of benchmarks, establishing state-of-the-art quality within and outside the training domain while reducing the space footprint of late interaction models by 6{--}10x.",
}
@inproceedings{10.1145/3511808.3557325,
author = {Santhanam, Keshav and Khattab, Omar and Potts, Christopher and Zaharia, Matei},
title = {PLAID: An Efficient Engine for Late Interaction Retrieval},
year = {2022},
isbn = {9781450392365},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3511808.3557325},
doi = {10.1145/3511808.3557325},
abstract = {Pre-trained language models are increasingly important components across multiple information retrieval (IR) paradigms. Late interaction, introduced with the ColBERT model and recently refined in ColBERTv2, is a popular paradigm that holds state-of-the-art status across many benchmarks. To dramatically speed up the search latency of late interaction, we introduce the Performance-optimized Late Interaction Driver (PLAID) engine. Without impacting quality, PLAID swiftly eliminates low-scoring passages using a novel centroid interaction mechanism that treats every passage as a lightweight bag of centroids. PLAID uses centroid interaction as well as centroid pruning, a mechanism for sparsifying the bag of centroids, within a highly-optimized engine to reduce late interaction search latency by up to 7x on a GPU and 45x on a CPU against vanilla ColBERTv2, while continuing to deliver state-of-the-art retrieval quality. This allows the PLAID engine with ColBERTv2 to achieve latency of tens of milliseconds on a GPU and tens or just few hundreds of milliseconds on a CPU at large scale, even at the largest scales we evaluate with 140M passages.},
booktitle = {Proceedings of the 31st ACM International Conference on Information \& Knowledge Management},
pages = {1747–1756},
numpages = {10},
keywords = {colbert, dynamic pruning, efficient search, late interaction, neural information retrieval},
location = {Atlanta, GA, USA},
series = {CIKM '22}
}
```
If the implementation helps your research, please cite:
```
@article{Lin_Mei_Chen_Byrne_2024,
title={PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers},
url={http://arxiv.org/abs/2402.08327},
number={arXiv:2402.08327},
publisher={arXiv},
author={Lin, Weizhe and Mei, Jingbiao and Chen, Jinghong and Byrne, Bill},
year={2024}}
``` |
laharikayl/my-pet-dog | laharikayl | 2024-02-27T05:18:18Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-02-27T05:13:02Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by laharikayl following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:

|
Dangurangu/LaBSE-masakhane-news-finetuned-shona | Dangurangu | 2024-02-27T05:16:10Z | 1 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | 2024-02-27T05:05:04Z | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# LaBSE-masakhane-news-finetuned-shona
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("Dangurangu/LaBSE-masakhane-news-finetuned-shona")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
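The two-stage procedure described at the top of this card (contrastive fine-tuning of the Sentence Transformer, then a classification head) is not shown above. The sketch below illustrates it with the SetFit trainer API of this release; the dataset identifier, config name, and column names are assumptions for illustration only.
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer, sample_dataset

# Assumption: the MasakhaNEWS Shona split is available under this id/config
# with "text"/"label" columns and a validation split.
dataset = load_dataset("masakhane/masakhanews", "sna")
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=8)

model = SetFitModel.from_pretrained("sentence-transformers/LaBSE")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=dataset["validation"],
    loss_class=CosineSimilarityLoss,
    batch_size=16,
    num_iterations=20,  # number of contrastive text pairs generated per example
    num_epochs=1,
)
trainer.train()
metrics = trainer.evaluate()
```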
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
DimalChathuranga/bert-finetuned-ner | DimalChathuranga | 2024-02-27T05:10:00Z | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-27T04:38:26Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0563
- Precision: 0.9290
- Recall: 0.9487
- F1: 0.9387
- Accuracy: 0.9863
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
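The hyperparameters listed above correspond roughly to the `TrainingArguments` sketched below; this is a reconstruction for orientation, not the exact training script, and the dataset loading and tokenization code is omitted.
```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)
```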
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0776 | 1.0 | 1756 | 0.0800 | 0.9070 | 0.9310 | 0.9189 | 0.9790 |
| 0.0399 | 2.0 | 3512 | 0.0577 | 0.9239 | 0.9458 | 0.9347 | 0.9849 |
| 0.0264 | 3.0 | 5268 | 0.0563 | 0.9290 | 0.9487 | 0.9387 | 0.9863 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2
|
liminerity/mm4.5-slerp-3b | liminerity | 2024-02-27T05:02:59Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/mm4-3b",
"liminerity/mm3-slerp-3b-fine-tuned",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T05:01:14Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/mm4-3b
- liminerity/mm3-slerp-3b-fine-tuned
---
# mm4.5-slerp-3b
mm4.5-slerp-3b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/mm4-3b](https://huggingface.co/liminerity/mm4-3b)
* [liminerity/mm3-slerp-3b-fine-tuned](https://huggingface.co/liminerity/mm3-slerp-3b-fine-tuned)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/mm4-3b
layer_range: [0, 40]
- model: liminerity/mm3-slerp-3b-fine-tuned
layer_range: [0, 40]
merge_method: slerp
base_model: liminerity/mm4-3b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
``` |
pankaj-munde/toy_peft_model-new | pankaj-munde | 2024-02-27T04:42:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T04:42:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mins0o0/transforemr_16 | mins0o0 | 2024-02-27T04:26:42Z | 98 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-27T04:26:10Z | ---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: transforemr_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# transforemr_16
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4841
- Bleu: 8.6082
- Gen Len: 17.5647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.8767 | 1.0 | 6355 | 1.6443 | 7.2826 | 17.6214 |
| 1.7864 | 2.0 | 12710 | 1.5863 | 7.7743 | 17.5883 |
| 1.7465 | 3.0 | 19065 | 1.5544 | 8.0399 | 17.5689 |
| 1.7034 | 4.0 | 25420 | 1.5304 | 8.1983 | 17.5708 |
| 1.6912 | 5.0 | 31775 | 1.5148 | 8.3483 | 17.5603 |
| 1.6652 | 6.0 | 38130 | 1.5022 | 8.4549 | 17.5658 |
| 1.6534 | 7.0 | 44485 | 1.4951 | 8.5235 | 17.563 |
| 1.6615 | 8.0 | 50840 | 1.4884 | 8.562 | 17.5624 |
| 1.6426 | 9.0 | 57195 | 1.4854 | 8.5932 | 17.5643 |
| 1.6451 | 10.0 | 63550 | 1.4841 | 8.6082 | 17.5647 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0
- Datasets 2.17.1
- Tokenizers 0.15.2
|
hadsag/albert-base-v2-finetuned-squad-v2 | hadsag | 2024-02-27T04:20:31Z | 132 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:hadsag/TinyBERT_General_4L_312D-finetuned-squad-v2",
"base_model:finetune:hadsag/TinyBERT_General_4L_312D-finetuned-squad-v2",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-26T12:25:14Z | ---
base_model: hadsag/TinyBERT_General_4L_312D-finetuned-squad
tags:
- generated_from_trainer
model-index:
- name: albert-base-v2-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
neerajnarwal/Llama-2-7b-chat-Question-Answering | neerajnarwal | 2024-02-27T04:19:58Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-02-27T04:18:39Z | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
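This section is empty in the card. Since the metadata declares `meta-llama/Llama-2-7b-chat-hf` as the base model and this repository as a PEFT adapter, the following hedged sketch shows the usual way to load such an adapter; the prompt format is purely illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"  # from the card's base_model field
adapter_id = "neerajnarwal/Llama-2-7b-chat-Question-Answering"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Illustrative prompt; the adapter's expected format is not documented here.
prompt = "Question: What is retrieval-augmented generation?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```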
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
Fhermin/ppo-SnowballTarget | Fhermin | 2024-02-27T04:15:13Z | 3 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | 2024-02-27T04:15:07Z | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Fhermin/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
GraphWiz/Mistral-7B-RFT | GraphWiz | 2024-02-27T04:13:53Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"graph problem",
"dataset:GraphWiz/GraphInstruct-RFT-72K",
"arxiv:2402.16029",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-18T11:59:18Z | ---
license: apache-2.0
datasets:
- GraphWiz/GraphInstruct-RFT-72K
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- graph problem
---
# GraphWiz
Project Page: [https://graph-wiz.github.io/](https://graph-wiz.github.io/)
Paper: [https://arxiv.org/abs/2402.16029.pdf](https://arxiv.org/abs/2402.16029)
Code: [https://github.com/nuochenpku/Graph-Reasoning-LLM](https://github.com/nuochenpku/Graph-Reasoning-LLM)
GraphWiz is a powerful instruction-following LLM that can map textual descriptions of graphs and structures and then solve different graph problems explicitly in natural language.
Training strategies include two stages: **Mixed-task Training** and **DPO Alignment**.
## Results
| *Models* | **Cycle** | **Connect** | **Bipartite** | **Topology** | **Shortest** | **Triangle** | **Flow** | **Hamilton** | **Subgraph** | **Average** |
|:-------------------------------------:|:-------------------------------:|:--------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:|:-----------------------------:|:---------------------------------:|:---------------------------------:|:--------------------------------------:|
| *In-Context Learning* |||||||||||
| **GPT-4 (zero-shot)** | 38.75 | 17.00 | 65.25 | 5.00 | 9.25 | 5.75 | 3.25 | 59.25 | 45.50 | 27.67 |
| **ChatGPT (2-shot)** | 51.25 | 43.75 | 70.75 | 4.50 | 3.50 | 17.25 | 8.50 | 54.25 | 43.00 | 32.97 |
| **GPT-4 (2-shot)** | 52.50 | 62.75 | 74.25 | 25.25 | 18.25 | 31.00 | 7.75 | 75.75 | 46.75 | 43.81 |
| *Mistral-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 78.50 | 1.00 | 23.00 | 47.00 | 28.75 | 31.75 | 41.25 | 46.56 |
| **GraphWiz** | **92.00** | **89.50** | 72.00 | 19.00 | **31.25** | 38.75 | 29.25 | 26.50 | **85.50** | 53.75 |
| **GraphWiz-DPO** | 85.50 | 79.50 | **85.50** | **85.25** | 12.50 | 29.00 | 35.50 | 62.75 | 48.50 | 58.22 |
| *LLaMA 2-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 41.25 | 4.00 | 9.50 | 30.00 | 16.50 | 69.00 | 75.45 | 44.81 |
| **GraphWiz** | 91.50 | 87.00 | 74.00 | 18.00 | **28.00** | 38.25 | 24.50 | 52.25 | **82.25** | 55.08 |
| **GraphWiz-DPO** | 89.00 | 82.50 | 84.75 | 46.75 | 24.00 | **52.75** | **43.50** | **81.50** | 77.25 | **65.00** |
| *LLaMA 2-13B* |||||||||||
| **Naive SFT** | 73.75 | 83.75 | 59.00 | 0.50 | 11.75 | 34.75 | 24.25 | 59.75 | 54.75 | 44.69 |
| **GraphWiz** | **94.75** | 87.00 | 78.00 | 28.00 | 27.75 | 36.00 | 24.50 | 59.00 | 81.50 | 57.39 |
| **GraphWiz-DPO** | 87.50 | **88.50** | **88.25** | **72.75** | 22.00 | **48.75** | **43.75** | 46.50 | 77.00 | **63.89** |
## Examples
```
G-Q: Determine whether or not there is a cycle in an undirected graph. In an undirected graph..,the
nodes are numbered from 0 to 88, and the edges are: (0, 73) (0, 51) (0, 10) (0, 63) (0, 28) (1, 62) (1, 57) (1, 84) (1, 61) (1, 5)
(1, 24) (2, 84) (2, 3) (2, 66) (2, 68) (2, 17) (2, 35) (2, 34) (2, 15) (3, 39) (3, 52) (3, 16) (3, 15) (3, 8) (4, 69) (4, 85)
(4, 36) (4, 72) (5, 44) (6, 77) (6, 7) (7, 85) (8, 64) (8, 23) (8, 28) (9, 34) (9, 31) (9, 61) (9, 28) (10, 26) (11, 37) (11, 39)
(11, 19) (11, 64) (13, 73) (13, 61) (13, 80) (13, 85) (14, 86) (14, 59) (14, 32) (14, 58) (14, 85) (14, 66) (15, 43) (15, 48) (15, 73)
(15, 19) (15, 47) (15, 68) (16, 46) (16, 60) (16, 84) (17, 44) (17, 72) (17, 36) (17, 37) (17, 61) (18, 20) (18, 24) (18, 22) (18, 41)
(19, 45) (19, 83) (20, 25) (20, 29) (21, 38) (21, 64) (21, 24) (21, 22) (21, 34) (22, 23) (23, 34) (23, 30) (23, 83) (24, 47) (24, 50)
(25, 59) (25, 42) (25, 70) (25, 72) (26, 45) (26, 30) (26, 87) (26, 80) (26, 50) (27, 77) (27, 58) (27, 60) (27, 29) (28, 36) (28, 59)
(28, 88) (29, 31) (29, 33) (29, 56) (29, 55) (29, 78) (29, 73) (29, 82) (30, 34) (31, 58) (31, 70) (32, 78) (33, 73) (33, 68) (34, 48)
(34, 88) (34, 51) (34, 66) (34, 54) (35, 43) (35, 56) (36, 44) (36, 39) (37, 57) (37, 71) (38, 47) (38, 58) (38, 88) (39, 74) (39, 48)
(39, 45) (39, 42) (40, 60) (40, 59) (40, 71) (40, 76) (41, 79) (42, 60) (42, 62) (42, 72) (43, 57) (44, 47) (44, 74) (44, 75) (44, 82)
(44, 70) (45, 68) (45, 67) (45, 71) (46, 84) (47, 56) (48, 75) (49, 71) (49, 58) (50, 76) (51, 55) (51, 87) (51, 79) (52, 78) (56, 69)
(56, 83) (56, 79) (57, 76) (57, 70) (57, 72) (57, 78) (59, 70) (60, 80) (60, 76) (60, 77) (60, 75) (61, 74) (61, 77) (62, 84) (62, 66)
(66, 69) (68, 88) (70, 88) (70, 76) (71, 81) (71, 72) (71, 86) (72, 75) (73, 82) (73, 83) (74, 85). Is there a cycle in this graph?
GraphWiz:
There are many possible ways to find a cycle in this graph, but one example is: Starting from node 0, we can go to node 73 (via edge 0-73), then to node 82 (via edge 73-82),
then to node 44 (via edge 82-44), then to node 17 (via edge 44-17), then to node 36 (via edge 17-36), then to node 28 (via edge 36-28),
and back to node 0 (via edge 28-0). This forms a cycle [0-73-82-44-17-36-28-0] without revisiting any edge. So, there is a cycle in this graph.
```
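## Inference Sketch
The card does not include a loading example, so the snippet below is a minimal, hedged sketch rather than the official recipe: it assumes the checkpoint works with the standard `transformers` Auto classes and that a plain instruction-style prompt (as in the example above) is acceptable. See the GitHub repository for the exact prompt template used in training.
```python
# Minimal inference sketch; the prompt format here is an assumption, not the official template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GraphWiz/Mistral-7B-RFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)

# A graph-reasoning question phrased like the example above.
prompt = "Determine whether or not there is a cycle in an undirected graph. ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```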
|
GraphWiz/LLaMA2-7B-DPO | GraphWiz | 2024-02-27T04:12:58Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"graph problem",
"dataset:GraphWiz/GraphInstruct-RFT-72K",
"arxiv:2402.16029",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-23T08:27:06Z | ---
license: apache-2.0
datasets:
- GraphWiz/GraphInstruct-RFT-72K
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- graph problem
---
# GraphWiz
Project Page: [https://graph-wiz.github.io/](https://graph-wiz.github.io/)
Paper: [https://arxiv.org/abs/2402.16029.pdf](https://arxiv.org/abs/2402.16029)
Code: [https://github.com/nuochenpku/Graph-Reasoning-LLM](https://github.com/nuochenpku/Graph-Reasoning-LLM)
GraphWiz is a powerful instruction-following LLM that can map textual descriptions of graphs and structures and then solve different graph problems explicitly in natural language.
Training strategies include two stages: **Mixed-task Training** and **DPO Alignment**.
## Results
| *Models* | **Cycle** | **Connect** | **Bipartite** | **Topology** | **Shortest** | **Triangle** | **Flow** | **Hamilton** | **Subgraph** | **Average** |
|:-------------------------------------:|:-------------------------------:|:--------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:|:-----------------------------:|:---------------------------------:|:---------------------------------:|:--------------------------------------:|
| *In-Context Learning* |||||||||||
| **GPT-4 (zero-shot)** | 38.75 | 17.00 | 65.25 | 5.00 | 9.25 | 5.75 | 3.25 | 59.25 | 45.50 | 27.67 |
| **ChatGPT (2-shot)** | 51.25 | 43.75 | 70.75 | 4.50 | 3.50 | 17.25 | 8.50 | 54.25 | 43.00 | 32.97 |
| **GPT-4 (2-shot)** | 52.50 | 62.75 | 74.25 | 25.25 | 18.25 | 31.00 | 7.75 | 75.75 | 46.75 | 43.81 |
| *Mistral-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 78.50 | 1.00 | 23.00 | 47.00 | 28.75 | 31.75 | 41.25 | 46.56 |
| **GraphWiz** | **92.00** | **89.50** | 72.00 | 19.00 | **31.25** | 38.75 | 29.25 | 26.50 | **85.50** | 53.75 |
| **GraphWiz-DPO** | 85.50 | 79.50 | **85.50** | **85.25** | 12.50 | 29.00 | 35.50 | 62.75 | 48.50 | 58.22 |
| *LLaMA 2-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 41.25 | 4.00 | 9.50 | 30.00 | 16.50 | 69.00 | 75.45 | 44.81 |
| **GraphWiz** | 91.50 | 87.00 | 74.00 | 18.00 | **28.00** | 38.25 | 24.50 | 52.25 | **82.25** | 55.08 |
| **GraphWiz-DPO** | 89.00 | 82.50 | 84.75 | 46.75 | 24.00 | **52.75** | **43.50** | **81.50** | 77.25 | **65.00** |
| *LLaMA 2-13B* |||||||||||
| **Naive SFT** | 73.75 | 83.75 | 59.00 | 0.50 | 11.75 | 34.75 | 24.25 | 59.75 | 54.75 | 44.69 |
| **GraphWiz** | **94.75** | 87.00 | 78.00 | 28.00 | 27.75 | 36.00 | 24.50 | 59.00 | 81.50 | 57.39 |
| **GraphWiz-DPO** | 87.50 | **88.50** | **88.25** | **72.75** | 22.00 | **48.75** | **43.75** | 46.50 | 77.00 | **63.89** |
## Examples
```
G-Q: Determine whether or not there is a cycle in an undirected graph. In an undirected graph..,the
nodes are numbered from 0 to 88, and the edges are: (0, 73) (0, 51) (0, 10) (0, 63) (0, 28) (1, 62) (1, 57) (1, 84) (1, 61) (1, 5)
(1, 24) (2, 84) (2, 3) (2, 66) (2, 68) (2, 17) (2, 35) (2, 34) (2, 15) (3, 39) (3, 52) (3, 16) (3, 15) (3, 8) (4, 69) (4, 85)
(4, 36) (4, 72) (5, 44) (6, 77) (6, 7) (7, 85) (8, 64) (8, 23) (8, 28) (9, 34) (9, 31) (9, 61) (9, 28) (10, 26) (11, 37) (11, 39)
(11, 19) (11, 64) (13, 73) (13, 61) (13, 80) (13, 85) (14, 86) (14, 59) (14, 32) (14, 58) (14, 85) (14, 66) (15, 43) (15, 48) (15, 73)
(15, 19) (15, 47) (15, 68) (16, 46) (16, 60) (16, 84) (17, 44) (17, 72) (17, 36) (17, 37) (17, 61) (18, 20) (18, 24) (18, 22) (18, 41)
(19, 45) (19, 83) (20, 25) (20, 29) (21, 38) (21, 64) (21, 24) (21, 22) (21, 34) (22, 23) (23, 34) (23, 30) (23, 83) (24, 47) (24, 50)
(25, 59) (25, 42) (25, 70) (25, 72) (26, 45) (26, 30) (26, 87) (26, 80) (26, 50) (27, 77) (27, 58) (27, 60) (27, 29) (28, 36) (28, 59)
(28, 88) (29, 31) (29, 33) (29, 56) (29, 55) (29, 78) (29, 73) (29, 82) (30, 34) (31, 58) (31, 70) (32, 78) (33, 73) (33, 68) (34, 48)
(34, 88) (34, 51) (34, 66) (34, 54) (35, 43) (35, 56) (36, 44) (36, 39) (37, 57) (37, 71) (38, 47) (38, 58) (38, 88) (39, 74) (39, 48)
(39, 45) (39, 42) (40, 60) (40, 59) (40, 71) (40, 76) (41, 79) (42, 60) (42, 62) (42, 72) (43, 57) (44, 47) (44, 74) (44, 75) (44, 82)
(44, 70) (45, 68) (45, 67) (45, 71) (46, 84) (47, 56) (48, 75) (49, 71) (49, 58) (50, 76) (51, 55) (51, 87) (51, 79) (52, 78) (56, 69)
(56, 83) (56, 79) (57, 76) (57, 70) (57, 72) (57, 78) (59, 70) (60, 80) (60, 76) (60, 77) (60, 75) (61, 74) (61, 77) (62, 84) (62, 66)
(66, 69) (68, 88) (70, 88) (70, 76) (71, 81) (71, 72) (71, 86) (72, 75) (73, 82) (73, 83) (74, 85). Is there a cycle in this graph?
GraphWiz:
There are many possible ways to find a cycle in this graph, but one example is: Starting from node 0, we can go to node 73 (via edge 0-73), then to node 82 (via edge 73-82),
then to node 44 (via edge 82-44), then to node 17 (via edge 44-17), then to node 36 (via edge 17-36), then to node 28 (via edge 36-28),
and back to node 0 (via edge 28-0). This forms a cycle [0-73-82-44-17-36-28-0] without revisiting any edge. So, there is a cycle in this graph.
```
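## Inference Sketch
A minimal, hedged loading sketch (not from the original card): it assumes the checkpoint loads with the standard `transformers` Auto classes and that a plain instruction-style prompt like the example above is acceptable; the exact prompt template is in the GitHub repository.
```python
# Minimal inference sketch; the prompt format here is an assumption, not the official template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GraphWiz/LLaMA2-7B-DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)

prompt = "Determine whether or not there is a cycle in an undirected graph. ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```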
|
GraphWiz/LLaMA2-13B-RFT | GraphWiz | 2024-02-27T04:12:41Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"graph problem",
"dataset:GraphWiz/GraphInstruct-RFT-72K",
"arxiv:2402.16029",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-23T08:27:46Z | ---
license: apache-2.0
datasets:
- GraphWiz/GraphInstruct-RFT-72K
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- graph problem
---
# GraphWiz
Project Page: [https://graph-wiz.github.io/](https://graph-wiz.github.io/)
Paper: [https://arxiv.org/abs/2402.16029.pdf](https://arxiv.org/abs/2402.16029)
Code: [https://github.com/nuochenpku/Graph-Reasoning-LLM](https://github.com/nuochenpku/Graph-Reasoning-LLM)
GraphWiz is a powerful instruction-following LLM that can map textual descriptions of graphs and structures and then solve different graph problems explicitly in natural language.
Training strategies include two stages: **Mixed-task Training** and **DPO Alignment**.
## Results
| *Models* | **Cycle** | **Connect** | **Bipartite** | **Topology** | **Shortest** | **Triangle** | **Flow** | **Hamilton** | **Subgraph** | **Average** |
|:-------------------------------------:|:-------------------------------:|:--------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:|:-----------------------------:|:---------------------------------:|:---------------------------------:|:--------------------------------------:|
| *In-Context Learning* |||||||||||
| **GPT-4 (zero-shot)** | 38.75 | 17.00 | 65.25 | 5.00 | 9.25 | 5.75 | 3.25 | 59.25 | 45.50 | 27.67 |
| **ChatGPT (2-shot)** | 51.25 | 43.75 | 70.75 | 4.50 | 3.50 | 17.25 | 8.50 | 54.25 | 43.00 | 32.97 |
| **GPT-4 (2-shot)** | 52.50 | 62.75 | 74.25 | 25.25 | 18.25 | 31.00 | 7.75 | 75.75 | 46.75 | 43.81 |
| *Mistral-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 78.50 | 1.00 | 23.00 | 47.00 | 28.75 | 31.75 | 41.25 | 46.56 |
| **GraphWiz** | **92.00** | **89.50** | 72.00 | 19.00 | **31.25** | 38.75 | 29.25 | 26.50 | **85.50** | 53.75 |
| **GraphWiz-DPO** | 85.50 | 79.50 | **85.50** | **85.25** | 12.50 | 29.00 | 35.50 | 62.75 | 48.50 | 58.22 |
| *LLaMA 2-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 41.25 | 4.00 | 9.50 | 30.00 | 16.50 | 69.00 | 75.45 | 44.81 |
| **GraphWiz** | 91.50 | 87.00 | 74.00 | 18.00 | **28.00** | 38.25 | 24.50 | 52.25 | **82.25** | 55.08 |
| **GraphWiz-DPO** | 89.00 | 82.50 | 84.75 | 46.75 | 24.00 | **52.75** | **43.50** | **81.50** | 77.25 | **65.00** |
| *LLaMA 2-13B* |||||||||||
| **Naive SFT** | 73.75 | 83.75 | 59.00 | 0.50 | 11.75 | 34.75 | 24.25 | 59.75 | 54.75 | 44.69 |
| **GraphWiz** | **94.75** | 87.00 | 78.00 | 28.00 | 27.75 | 36.00 | 24.50 | 59.00 | 81.50 | 57.39 |
| **GraphWiz-DPO** | 87.50 | **88.50** | **88.25** | **72.75** | 22.00 | **48.75** | **43.75** | 46.50 | 77.00 | **63.89** |
## Examples
```
G-Q: Determine whether or not there is a cycle in an undirected graph. In an undirected graph..,the
nodes are numbered from 0 to 88, and the edges are: (0, 73) (0, 51) (0, 10) (0, 63) (0, 28) (1, 62) (1, 57) (1, 84) (1, 61) (1, 5)
(1, 24) (2, 84) (2, 3) (2, 66) (2, 68) (2, 17) (2, 35) (2, 34) (2, 15) (3, 39) (3, 52) (3, 16) (3, 15) (3, 8) (4, 69) (4, 85)
(4, 36) (4, 72) (5, 44) (6, 77) (6, 7) (7, 85) (8, 64) (8, 23) (8, 28) (9, 34) (9, 31) (9, 61) (9, 28) (10, 26) (11, 37) (11, 39)
(11, 19) (11, 64) (13, 73) (13, 61) (13, 80) (13, 85) (14, 86) (14, 59) (14, 32) (14, 58) (14, 85) (14, 66) (15, 43) (15, 48) (15, 73)
(15, 19) (15, 47) (15, 68) (16, 46) (16, 60) (16, 84) (17, 44) (17, 72) (17, 36) (17, 37) (17, 61) (18, 20) (18, 24) (18, 22) (18, 41)
(19, 45) (19, 83) (20, 25) (20, 29) (21, 38) (21, 64) (21, 24) (21, 22) (21, 34) (22, 23) (23, 34) (23, 30) (23, 83) (24, 47) (24, 50)
(25, 59) (25, 42) (25, 70) (25, 72) (26, 45) (26, 30) (26, 87) (26, 80) (26, 50) (27, 77) (27, 58) (27, 60) (27, 29) (28, 36) (28, 59)
(28, 88) (29, 31) (29, 33) (29, 56) (29, 55) (29, 78) (29, 73) (29, 82) (30, 34) (31, 58) (31, 70) (32, 78) (33, 73) (33, 68) (34, 48)
(34, 88) (34, 51) (34, 66) (34, 54) (35, 43) (35, 56) (36, 44) (36, 39) (37, 57) (37, 71) (38, 47) (38, 58) (38, 88) (39, 74) (39, 48)
(39, 45) (39, 42) (40, 60) (40, 59) (40, 71) (40, 76) (41, 79) (42, 60) (42, 62) (42, 72) (43, 57) (44, 47) (44, 74) (44, 75) (44, 82)
(44, 70) (45, 68) (45, 67) (45, 71) (46, 84) (47, 56) (48, 75) (49, 71) (49, 58) (50, 76) (51, 55) (51, 87) (51, 79) (52, 78) (56, 69)
(56, 83) (56, 79) (57, 76) (57, 70) (57, 72) (57, 78) (59, 70) (60, 80) (60, 76) (60, 77) (60, 75) (61, 74) (61, 77) (62, 84) (62, 66)
(66, 69) (68, 88) (70, 88) (70, 76) (71, 81) (71, 72) (71, 86) (72, 75) (73, 82) (73, 83) (74, 85). Is there a cycle in this graph?
GraphWiz:
There are many possible ways to find a cycle in this graph, but one example is: Starting from node 0, we can go to node 73 (via edge 0-73), then to node 82 (via edge 73-82),
then to node 44 (via edge 82-44), then to node 17 (via edge 44-17), then to node 36 (via edge 17-36), then to node 28 (via edge 36-28),
and back to node 0 (via edge 28-0). This forms a cycle [0-73-82-44-17-36-28-0] without revisiting any edge. So, there is a cycle in this graph.
```
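## Inference Sketch
A minimal, hedged loading sketch (not from the original card): it assumes the checkpoint loads with the standard `transformers` Auto classes and that a plain instruction-style prompt like the example above is acceptable; the exact prompt template is in the GitHub repository.
```python
# Minimal inference sketch; the prompt format here is an assumption, not the official template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GraphWiz/LLaMA2-13B-RFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)

prompt = "Determine whether or not there is a cycle in an undirected graph. ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```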
|
GraphWiz/LLaMA2-7B-RFT | GraphWiz | 2024-02-27T04:12:25Z | 2 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"graph problem",
"dataset:GraphWiz/GraphInstruct-RFT-72K",
"arxiv:2402.16029",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-23T08:26:50Z | ---
license: apache-2.0
datasets:
- GraphWiz/GraphInstruct-RFT-72K
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- graph problem
---
# GraphWiz
Project Page: [https://graph-wiz.github.io/](https://graph-wiz.github.io/)
Paper: [https://arxiv.org/abs/2402.16029.pdf](https://arxiv.org/abs/2402.16029)
Code: [https://github.com/nuochenpku/Graph-Reasoning-LLM](https://github.com/nuochenpku/Graph-Reasoning-LLM)
GraphWiz is a powerful instruction-following LLM that can map textual descriptions of graphs and structures and then solve different graph problems explicitly in natural language.
Training strategies include two stages: **Mixed-task Training** and **DPO Alignment**.
## Results
| *Models* | **Cycle** | **Connect** | **Bipartite** | **Topology** | **Shortest** | **Triangle** | **Flow** | **Hamilton** | **Subgraph** | **Average** |
|:-------------------------------------:|:-------------------------------:|:--------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:|:-----------------------------:|:---------------------------------:|:---------------------------------:|:--------------------------------------:|
| *In-Context Learning* |||||||||||
| **GPT-4 (zero-shot)** | 38.75 | 17.00 | 65.25 | 5.00 | 9.25 | 5.75 | 3.25 | 59.25 | 45.50 | 27.67 |
| **ChatGPT (2-shot)** | 51.25 | 43.75 | 70.75 | 4.50 | 3.50 | 17.25 | 8.50 | 54.25 | 43.00 | 32.97 |
| **GPT-4 (2-shot)** | 52.50 | 62.75 | 74.25 | 25.25 | 18.25 | 31.00 | 7.75 | 75.75 | 46.75 | 43.81 |
| *Mistral-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 78.50 | 1.00 | 23.00 | 47.00 | 28.75 | 31.75 | 41.25 | 46.56 |
| **GraphWiz** | **92.00** | **89.50** | 72.00 | 19.00 | **31.25** | 38.75 | 29.25 | 26.50 | **85.50** | 53.75 |
| **GraphWiz-DPO** | 85.50 | 79.50 | **85.50** | **85.25** | 12.50 | 29.00 | 35.50 | 62.75 | 48.50 | 58.22 |
| *LLaMA 2-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 41.25 | 4.00 | 9.50 | 30.00 | 16.50 | 69.00 | 75.45 | 44.81 |
| **GraphWiz** | 91.50 | 87.00 | 74.00 | 18.00 | **28.00** | 38.25 | 24.50 | 52.25 | **82.25** | 55.08 |
| **GraphWiz-DPO** | 89.00 | 82.50 | 84.75 | 46.75 | 24.00 | **52.75** | **43.50** | **81.50** | 77.25 | **65.00** |
| *LLaMA 2-13B* |||||||||||
| **Naive SFT** | 73.75 | 83.75 | 59.00 | 0.50 | 11.75 | 34.75 | 24.25 | 59.75 | 54.75 | 44.69 |
| **GraphWiz** | **94.75** | 87.00 | 78.00 | 28.00 | 27.75 | 36.00 | 24.50 | 59.00 | 81.50 | 57.39 |
| **GraphWiz-DPO** | 87.50 | **88.50** | **88.25** | **72.75** | 22.00 | **48.75** | **43.75** | 46.50 | 77.00 | **63.89** |
## Examples
```
G-Q: Determine whether or not there is a cycle in an undirected graph. In an undirected graph..,the
nodes are numbered from 0 to 88, and the edges are: (0, 73) (0, 51) (0, 10) (0, 63) (0, 28) (1, 62) (1, 57) (1, 84) (1, 61) (1, 5)
(1, 24) (2, 84) (2, 3) (2, 66) (2, 68) (2, 17) (2, 35) (2, 34) (2, 15) (3, 39) (3, 52) (3, 16) (3, 15) (3, 8) (4, 69) (4, 85)
(4, 36) (4, 72) (5, 44) (6, 77) (6, 7) (7, 85) (8, 64) (8, 23) (8, 28) (9, 34) (9, 31) (9, 61) (9, 28) (10, 26) (11, 37) (11, 39)
(11, 19) (11, 64) (13, 73) (13, 61) (13, 80) (13, 85) (14, 86) (14, 59) (14, 32) (14, 58) (14, 85) (14, 66) (15, 43) (15, 48) (15, 73)
(15, 19) (15, 47) (15, 68) (16, 46) (16, 60) (16, 84) (17, 44) (17, 72) (17, 36) (17, 37) (17, 61) (18, 20) (18, 24) (18, 22) (18, 41)
(19, 45) (19, 83) (20, 25) (20, 29) (21, 38) (21, 64) (21, 24) (21, 22) (21, 34) (22, 23) (23, 34) (23, 30) (23, 83) (24, 47) (24, 50)
(25, 59) (25, 42) (25, 70) (25, 72) (26, 45) (26, 30) (26, 87) (26, 80) (26, 50) (27, 77) (27, 58) (27, 60) (27, 29) (28, 36) (28, 59)
(28, 88) (29, 31) (29, 33) (29, 56) (29, 55) (29, 78) (29, 73) (29, 82) (30, 34) (31, 58) (31, 70) (32, 78) (33, 73) (33, 68) (34, 48)
(34, 88) (34, 51) (34, 66) (34, 54) (35, 43) (35, 56) (36, 44) (36, 39) (37, 57) (37, 71) (38, 47) (38, 58) (38, 88) (39, 74) (39, 48)
(39, 45) (39, 42) (40, 60) (40, 59) (40, 71) (40, 76) (41, 79) (42, 60) (42, 62) (42, 72) (43, 57) (44, 47) (44, 74) (44, 75) (44, 82)
(44, 70) (45, 68) (45, 67) (45, 71) (46, 84) (47, 56) (48, 75) (49, 71) (49, 58) (50, 76) (51, 55) (51, 87) (51, 79) (52, 78) (56, 69)
(56, 83) (56, 79) (57, 76) (57, 70) (57, 72) (57, 78) (59, 70) (60, 80) (60, 76) (60, 77) (60, 75) (61, 74) (61, 77) (62, 84) (62, 66)
(66, 69) (68, 88) (70, 88) (70, 76) (71, 81) (71, 72) (71, 86) (72, 75) (73, 82) (73, 83) (74, 85). Is there a cycle in this graph?
GraphWiz:
There are many possible ways to find a cycle in this graph, but one example is: Starting from node 0, we can go to node 73 (via edge 0-73), then to node 82 (via edge 73-82),
then to node 44 (via edge 82-44), then to node 17 (via edge 44-17), then to node 36 (via edge 17-36), then to node 28 (via edge 36-28),
and back to node 0 (via edge 28-0). This forms a cycle [0-73-82-44-17-36-28-0] without revisiting any edge. So, there is a cycle in this graph.
```
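## Inference Sketch
A minimal, hedged loading sketch (not from the original card): it assumes the checkpoint loads with the standard `transformers` Auto classes and that a plain instruction-style prompt like the example above is acceptable; the exact prompt template is in the GitHub repository.
```python
# Minimal inference sketch; the prompt format here is an assumption, not the official template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GraphWiz/LLaMA2-7B-RFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)

prompt = "Determine whether or not there is a cycle in an undirected graph. ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```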
|
GraphWiz/Mistral-7B | GraphWiz | 2024-02-27T04:11:24Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"graph problem",
"dataset:GraphWiz/GraphInstruct-RFT-72K",
"arxiv:2402.16029",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-15T16:30:50Z | ---
license: apache-2.0
datasets:
- GraphWiz/GraphInstruct-RFT-72K
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- graph problem
---
# GraphWiz
Project Page: [https://graph-wiz.github.io/](https://graph-wiz.github.io/)
Paper: [https://arxiv.org/abs/2402.16029.pdf](https://arxiv.org/abs/2402.16029)
Code: [https://github.com/nuochenpku/Graph-Reasoning-LLM](https://github.com/nuochenpku/Graph-Reasoning-LLM)
GraphWiz is a powerful instruction-following LLM that can map textual descriptions of graphs and structures and then solve different graph problems explicitly in natural language.
Training strategies include two stages: **Mixed-task Training** and **DPO Alignment**.
## Results
| *Models* | **Cycle** | **Connect** | **Bipartite** | **Topology** | **Shortest** | **Triangle** | **Flow** | **Hamilton** | **Subgraph** | **Average** |
|:-------------------------------------:|:-------------------------------:|:--------------------------------:|:----------------------------------:|:---------------------------------:|:---------------------------------:|:---------------------------------:|:-----------------------------:|:---------------------------------:|:---------------------------------:|:--------------------------------------:|
| *In-Context Learning* |||||||||||
| **GPT-4 (zero-shot)** | 38.75 | 17.00 | 65.25 | 5.00 | 9.25 | 5.75 | 3.25 | 59.25 | 45.50 | 27.67 |
| **ChatGPT (2-shot)** | 51.25 | 43.75 | 70.75 | 4.50 | 3.50 | 17.25 | 8.50 | 54.25 | 43.00 | 32.97 |
| **GPT-4 (2-shot)** | 52.50 | 62.75 | 74.25 | 25.25 | 18.25 | 31.00 | 7.75 | 75.75 | 46.75 | 43.81 |
| *Mistral-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 78.50 | 1.00 | 23.00 | 47.00 | 28.75 | 31.75 | 41.25 | 46.56 |
| **GraphWiz** | **92.00** | **89.50** | 72.00 | 19.00 | **31.25** | 38.75 | 29.25 | 26.50 | **85.50** | 53.75 |
| **GraphWiz-DPO** | 85.50 | 79.50 | **85.50** | **85.25** | 12.50 | 29.00 | 35.50 | 62.75 | 48.50 | 58.22 |
| *LLaMA 2-7B* |||||||||||
| **Naive SFT** | 73.75 | 83.50 | 41.25 | 4.00 | 9.50 | 30.00 | 16.50 | 69.00 | 75.45 | 44.81 |
| **GraphWiz** | 91.50 | 87.00 | 74.00 | 18.00 | **28.00** | 38.25 | 24.50 | 52.25 | **82.25** | 55.08 |
| **GraphWiz-DPO** | 89.00 | 82.50 | 84.75 | 46.75 | 24.00 | **52.75** | **43.50** | **81.50** | 77.25 | **65.00** |
| *LLaMA 2-13B* |||||||||||
| **Naive SFT** | 73.75 | 83.75 | 59.00 | 0.50 | 11.75 | 34.75 | 24.25 | 59.75 | 54.75 | 44.69 |
| **GraphWiz** | **94.75** | 87.00 | 78.00 | 28.00 | 27.75 | 36.00 | 24.50 | 59.00 | 81.50 | 57.39 |
| **GraphWiz-DPO** | 87.50 | **88.50** | **88.25** | **72.75** | 22.00 | **48.75** | **43.75** | 46.50 | 77.00 | **63.89** |
## Examples
```
G-Q: Determine whether or not there is a cycle in an undirected graph. In an undirected graph..,the
nodes are numbered from 0 to 88, and the edges are: (0, 73) (0, 51) (0, 10) (0, 63) (0, 28) (1, 62) (1, 57) (1, 84) (1, 61) (1, 5)
(1, 24) (2, 84) (2, 3) (2, 66) (2, 68) (2, 17) (2, 35) (2, 34) (2, 15) (3, 39) (3, 52) (3, 16) (3, 15) (3, 8) (4, 69) (4, 85)
(4, 36) (4, 72) (5, 44) (6, 77) (6, 7) (7, 85) (8, 64) (8, 23) (8, 28) (9, 34) (9, 31) (9, 61) (9, 28) (10, 26) (11, 37) (11, 39)
(11, 19) (11, 64) (13, 73) (13, 61) (13, 80) (13, 85) (14, 86) (14, 59) (14, 32) (14, 58) (14, 85) (14, 66) (15, 43) (15, 48) (15, 73)
(15, 19) (15, 47) (15, 68) (16, 46) (16, 60) (16, 84) (17, 44) (17, 72) (17, 36) (17, 37) (17, 61) (18, 20) (18, 24) (18, 22) (18, 41)
(19, 45) (19, 83) (20, 25) (20, 29) (21, 38) (21, 64) (21, 24) (21, 22) (21, 34) (22, 23) (23, 34) (23, 30) (23, 83) (24, 47) (24, 50)
(25, 59) (25, 42) (25, 70) (25, 72) (26, 45) (26, 30) (26, 87) (26, 80) (26, 50) (27, 77) (27, 58) (27, 60) (27, 29) (28, 36) (28, 59)
(28, 88) (29, 31) (29, 33) (29, 56) (29, 55) (29, 78) (29, 73) (29, 82) (30, 34) (31, 58) (31, 70) (32, 78) (33, 73) (33, 68) (34, 48)
(34, 88) (34, 51) (34, 66) (34, 54) (35, 43) (35, 56) (36, 44) (36, 39) (37, 57) (37, 71) (38, 47) (38, 58) (38, 88) (39, 74) (39, 48)
(39, 45) (39, 42) (40, 60) (40, 59) (40, 71) (40, 76) (41, 79) (42, 60) (42, 62) (42, 72) (43, 57) (44, 47) (44, 74) (44, 75) (44, 82)
(44, 70) (45, 68) (45, 67) (45, 71) (46, 84) (47, 56) (48, 75) (49, 71) (49, 58) (50, 76) (51, 55) (51, 87) (51, 79) (52, 78) (56, 69)
(56, 83) (56, 79) (57, 76) (57, 70) (57, 72) (57, 78) (59, 70) (60, 80) (60, 76) (60, 77) (60, 75) (61, 74) (61, 77) (62, 84) (62, 66)
(66, 69) (68, 88) (70, 88) (70, 76) (71, 81) (71, 72) (71, 86) (72, 75) (73, 82) (73, 83) (74, 85). Is there a cycle in this graph?
GraphWiz:
There are many possible ways to find a cycle in this graph, but one example is: Starting from node 0, we can go to node 73 (via edge 0-73), then to node 82 (via edge 73-82),
then to node 44 (via edge 82-44), then to node 17 (via edge 44-17), then to node 36 (via edge 17-36), then to node 28 (via edge 36-28),
and back to node 0 (via edge 28-0). This forms a cycle [0-73-82-44-17-36-28-0] without revisiting any edge. So, there is a cycle in this graph.
```
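## Inference Sketch
A minimal, hedged loading sketch (not from the original card): it assumes the checkpoint loads with the standard `transformers` Auto classes and that a plain instruction-style prompt like the example above is acceptable; the exact prompt template is in the GitHub repository.
```python
# Minimal inference sketch; the prompt format here is an assumption, not the official template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GraphWiz/Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires accelerate
)

prompt = "Determine whether or not there is a cycle in an undirected graph. ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```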
|
Nuo97/COMEDY_7B | Nuo97 | 2024-02-27T04:09:34Z | 15 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"question-answering",
"zh",
"dataset:Nuo97/Dolphin-DPO",
"arxiv:2402.11975",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-18T13:19:53Z | ---
license: apache-2.0
datasets:
- Nuo97/Dolphin-DPO
language:
- zh
metrics:
- bleu
pipeline_tag: question-answering
---
# COMEDY: COmpressive Memory-Enhanced Dialogue sYstems framework.
Github: https://github.com/nuochenpku/COMEDY
Paper: https://arxiv.org/abs/2402.11975.pdf
<br>
<div align="center">
<img src="comedy.png" width="40%" title="Introduction Figure">
</div>
### Task: Long-Term Conversation Dialogue Generation
Different from previous retrieval-based methods, COMEDY doesn't rely on any **retrieval module or database**.
Instead, COMEDY adopts a groundbreaking "**One-for-All**" approach, utilizing a single, unified model to manage the entire long-term memory dialogue pipeline, from memory generation and compression to final response generation.
- COMEDY first distills session-specific memory from past dialogues, encompassing fine-grained session summaries, including event recaps, as well as detailed user and bot portraits;
- In a break from traditional systems, COMEDY eschews the use of a memory database for storing these insights. Instead, it reprocesses and condenses memories from all past interactions, forming a *Compressive Memory*: The first part is the **concise events** that have occurred throughout all the conversations, creating a historical narrative that the system can draw upon. The second and third parts consist of a **detailed user profile** and the **dynamic relationship changes** between the user and chatbot across sessions, both derived from past conversational events.
- Finally, COMEDY skillfully integrates this compressive memory into ongoing conversations, enabling contextually memory-enhanced interactions.
### Training Dataset
**Dolphin** is the largest Chinese long-term conversation dataset, built from actual online user-chatbot interactions.
This dataset comprises an extensive collection of 100k samples covering three tasks:
- **Session-Level Memory Summarization**
- **Memory Compression**
- **Memory-Grounded Response Generation**
Dolphin is available at [**Dolphin**](https://huggingface.co/datasets/Nuo97/Dolphin-DPO)
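As a hedged illustration (not part of the original card), the snippet below shows how one might pull the Dolphin-DPO data and this checkpoint with the standard `datasets`/`transformers` APIs; the task-specific prompt formats are defined in the GitHub repository.
```python
# Hedged sketch: assumes the checkpoint loads with the standard Auto classes.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

dolphin = load_dataset("Nuo97/Dolphin-DPO")   # the three long-term-memory tasks
model_id = "Nuo97/COMEDY_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

print(dolphin)                                # inspect available splits and fields
```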
### Training Strategy
Our training strategies include two stages: Mixed-task training and DPO Alignment.
<br>
<div align="center">
<img src="training_strategy.png" width="100%" title="Introduction Figure">
</div>
|
Nuo97/COMEDY_13B_DPO | Nuo97 | 2024-02-27T04:09:13Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"question-answering",
"zh",
"dataset:Nuo97/Dolphin-DPO",
"arxiv:2402.11975",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-02-18T13:19:34Z | ---
license: apache-2.0
datasets:
- Nuo97/Dolphin-DPO
language:
- zh
metrics:
- bleu
pipeline_tag: question-answering
---
# COMEDY: COmpressive Memory-Enhanced Dialogue sYstems framework.
Github: https://github.com/nuochenpku/COMEDY
Paper: https://arxiv.org/abs/2402.11975.pdf
<br>
<div align="center">
<img src="comedy.png" width="40%" title="Introduction Figure">
</div>
### Task: Long-Term Conversation Dialogue Generation
Different from previous retrieval-based methods, COMEDY doesn't rely on any **retrieval module or database**.
Instead, COMEDY adopts a groundbreaking "**One-for-All**" approach, utilizing a single, unified model to manage the entire long-term memory dialogue pipeline, from memory generation and compression to final response generation.
- COMEDY first distills session-specific memory from past dialogues, encompassing fine-grained session summaries, including event recaps, as well as detailed user and bot portraits;
- In a break from traditional systems, COMEDY eschews the use of a memory database for storing these insights. Instead, it reprocesses and condenses memories from all past interactions, forming a *Compressive Memory*: The first part is the **concise events** that have occurred throughout all the conversations, creating a historical narrative that the system can draw upon. The second and third parts consist of a **detailed user profile** and the **dynamic relationship changes** between the user and chatbot across sessions, both derived from past conversational events.
- Finally, COMEDY skillfully integrates this compressive memory into ongoing conversations, enabling contextually memory-enhanced interactions.
### Training Dataset
**Dolphin** is the largest Chinese long-term conversation dataset, built from actual online user-chatbot interactions.
This dataset comprises an extensive collection of 100k samples covering three tasks:
- **Session-Level Memory Summarization**
- **Memory Compression**
- **Memory-Grounded Response Generation**
Dolphin is available at [**Dolphin**](https://huggingface.co/datasets/Nuo97/Dolphin-DPO)
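As a hedged illustration (not part of the original card), the snippet below shows how one might pull the Dolphin-DPO data and this checkpoint with the standard `datasets`/`transformers` APIs; the task-specific prompt formats are defined in the GitHub repository.
```python
# Hedged sketch: assumes the checkpoint loads with the standard Auto classes.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

dolphin = load_dataset("Nuo97/Dolphin-DPO")   # the three long-term-memory tasks
model_id = "Nuo97/COMEDY_13B_DPO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

print(dolphin)                                # inspect available splits and fields
```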
### Training Strategy
Our training strategies include two stages: Mixed-task training and DPO Alignment.
<br>
<div align="center">
<img src="training_strategy.png" width="90%" title="Introduction Figure">
</div>
|
Kishan11/nepali-summry | Kishan11 | 2024-02-27T04:07:03Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-02-27T04:06:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SyedShaheer/distilbart-cnn-12-6_TUNED | SyedShaheer | 2024-02-27T03:57:40Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-02-27T03:51:27Z | ---
language:
- en
metrics:
- rouge
pipeline_tag: summarization
--- |
AlexAgafontsev/bert-finetuned-sem_eval-english | AlexAgafontsev | 2024-02-27T03:47:58Z | 110 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-24T14:45:02Z | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: bert-finetuned-sem_eval-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-english
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9943
- F1: 0.2299
- Roc Auc: 0.5451
- Accuracy: 0.0790
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 312 | 2.9943 | 0.2299 | 0.5451 | 0.0790 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.1
|
macto/exllamav2-test | macto | 2024-02-27T03:44:58Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | 2024-02-27T03:39:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FlippyCode/ppo-Huggy | FlippyCode | 2024-02-27T03:44:16Z | 11 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-02-27T03:42:41Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: FlippyCode/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
tony4194/distilBERT-infoExtract-v2 | tony4194 | 2024-02-27T03:41:32Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-02-27T03:12:48Z | ---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilBERT-infoExtract-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT-infoExtract-v2
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0704
- Precision: 0.9146
- Recall: 0.9350
- F1: 0.9247
- Accuracy: 0.9831
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0989 | 1.0 | 1756 | 0.0877 | 0.8808 | 0.9152 | 0.8977 | 0.9748 |
| 0.0498 | 2.0 | 3512 | 0.0697 | 0.9027 | 0.9286 | 0.9155 | 0.9814 |
| 0.0328 | 3.0 | 5268 | 0.0704 | 0.9146 | 0.9350 | 0.9247 | 0.9831 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
cerkut/mps-tuned-gtzan | cerkut | 2024-02-27T03:40:32Z | 162 | 0 | transformers | [
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-02-27T03:30:41Z | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- gtzan
metrics:
- accuracy
model-index:
- name: mps-tuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: gtzan
type: gtzan
config: all
split: None
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.75
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mps-tuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the gtzan dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2018
- Accuracy: 0.75
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 4 | 2.2018 | 0.75 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.0
- Tokenizers 0.15.2
|
thu-coai/ShieldLM-13B-baichuan2 | thu-coai | 2024-02-27T03:35:21Z | 8 | 3 | transformers | [
"transformers",
"safetensors",
"baichuan",
"text-generation",
"custom_code",
"en",
"zh",
"arxiv:2402.16444",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T08:02:18Z | ---
license: mit
language:
- en
- zh
---
## Introduction
This is the ShieldLM model ([paper link](https://arxiv.org/abs/2402.16444)) initialized from [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat). ShieldLM is a bilingual (Chinese and English) safety detector that mainly aims to help detect safety issues in LLMs' generations. It aligns with general human safety standards, supports fine-grained customizable detection rules, and provides explanations for its decisions.
Refer to our [github repository](https://github.com/thu-coai/ShieldLM) for more detailed information.
## Usage
Please refer to our [github repository](https://github.com/thu-coai/ShieldLM) for the detailed usage instructions.
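As a rough, hedged sketch (the authoritative instructions and the detection prompt format are in the GitHub repository), loading the checkpoint with `transformers` would look roughly like this; since the repo ships custom code, `trust_remote_code=True` is likely required.
```python
# Hedged loading sketch; the detection prompt format is defined in the ShieldLM repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thu-coai/ShieldLM-13B-baichuan2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"  # device_map requires accelerate
)
```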
## Performance
ShieldLM demonstrates impressive detection performance across 4 ID and OOD test sets, compared to strong baselines such as GPT-4, Llama Guard and Perspective API.
Refer to [our paper](https://arxiv.org/abs/2402.16444) for more detailed evaluation results. |
thu-coai/ShieldLM-6B-chatglm3 | thu-coai | 2024-02-27T03:34:10Z | 11 | 3 | transformers | [
"transformers",
"safetensors",
"chatglm",
"feature-extraction",
"custom_code",
"en",
"zh",
"arxiv:2402.16444",
"license:mit",
"region:us"
] | feature-extraction | 2024-02-26T11:10:42Z | ---
license: mit
language:
- en
- zh
---
## Introduction
This is the ShieldLM model ([paper link](https://arxiv.org/abs/2402.16444)) initialized from [chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b). ShieldLM is a bilingual (Chinese and English) safety detector that mainly aims to help detect safety issues in LLMs' generations. It aligns with general human safety standards, supports fine-grained customizable detection rules, and provides explanations for its decisions.
Refer to our [github repository](https://github.com/thu-coai/ShieldLM) for more detailed information.
## Usage
Please refer to our [github repository](https://github.com/thu-coai/ShieldLM) for the detailed usage instructions.
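As a rough, hedged sketch (the authoritative instructions and the detection prompt format are in the GitHub repository), loading the checkpoint with `transformers` would look roughly like this; ChatGLM3-based checkpoints typically load via `AutoModel` with `trust_remote_code=True`.
```python
# Hedged loading sketch; the detection prompt format is defined in the ShieldLM repo.
from transformers import AutoModel, AutoTokenizer

model_id = "thu-coai/ShieldLM-6B-chatglm3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
# Move to half precision / GPU as needed, e.g. model = model.half().cuda()
```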
## Performance
ShieldLM demonstrates impressive detection performance across 4 ID and OOD test sets, compared to strong baselines such as GPT-4, Llama Guard and Perspective API.
Refer to [our paper](https://arxiv.org/abs/2402.16444) for more detailed evaluation results. |
OwOpeepeepoopoo/easy_america2 | OwOpeepeepoopoo | 2024-02-27T03:31:04Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T03:28:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TideDra/llava-v1.6-mistral-7b-processor | TideDra | 2024-02-27T03:29:15Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T03:29:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ronysalem/arabert-dialect | Ronysalem | 2024-02-27T03:17:22Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:asafaya/bert-base-arabic",
"base_model:finetune:asafaya/bert-base-arabic",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T03:10:47Z | ---
base_model: asafaya/bert-base-arabic
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5491
- Accuracy: 0.8431
- F1: 0.8420
- Precision: 0.8419
- Recall: 0.8431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.581 | 1.0 | 1847 | 0.4896 | 0.8299 | 0.8267 | 0.8316 | 0.8299 |
| 0.3579 | 2.0 | 3694 | 0.4723 | 0.8403 | 0.8376 | 0.8429 | 0.8403 |
| 0.1966 | 3.0 | 5541 | 0.5491 | 0.8431 | 0.8420 | 0.8419 | 0.8431 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ThomasFG/E3-0-67.5-67.5 | ThomasFG | 2024-02-27T03:12:09Z | 77 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small.en",
"base_model:finetune:openai/whisper-small.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-02-26T18:47:57Z | ---
license: apache-2.0
base_model: openai/whisper-small.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 2024-02-26_19-47-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2024-02-26_19-47-53
This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2553
- Wer: 9.1693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2397 | 1.0 | 711 | 0.2432 | 9.3307 |
| 0.0921 | 2.0 | 1422 | 0.2449 | 9.4052 |
| 0.0642 | 3.0 | 2133 | 0.2553 | 9.1693 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.13.1+cu116
- Datasets 2.17.0
- Tokenizers 0.15.2
|
cgato/Thespis-CurtainCall-7b-v0.2.1-GGUF | cgato | 2024-02-27T03:10:29Z | 8 | 0 | null | [
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T01:39:48Z | ---
license: cc-by-nc-4.0
---
This model is the first in a series of experiments to make my models a bit smarter. It's nowhere near done, but my initial testing was good, so I'm uploading it so people can check it out.
Datasets Used:
* Dolphin
* Ultrachat
* Capybara
* Augmental
* ToxicQA
* Magiccoder-Evol-Instruct-110k
* Yahoo Answers
* OpenOrca
* Airoboros 3.1
* grimulkan/physical-reasoning and theory-of-mind
## Prompt Format: Chat (The default Ooba template and Silly Tavern Template)
```
{System Prompt}
Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
```
## Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.03)
## Recommended Kobold Horde Preset -> MinP |
gingdev/ictu-vinallama-gguf | gingdev | 2024-02-27T03:09:41Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"vi",
"dataset:gingdev/ictu-lecturers",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-02-27T02:53:01Z | ---
license: mit
datasets:
- gingdev/ictu-lecturers
language:
- vi
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
--- |
djsull/sentence-roberta-multitask | djsull | 2024-02-27T02:51:30Z | 11 | 2 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-02-27T02:40:28Z | ---
library_name: sentence-transformers
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
pipeline_tag: sentence-similarity
---
# djsull/sentence-roberta-multitask
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('djsull/sentence-roberta-multitask')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('djsull/sentence-roberta-multitask')
model = AutoModel.from_pretrained('djsull/sentence-roberta-multitask')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=djsull/sentence-roberta-multitask)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8885 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss` with parameters:
```
{'loss': 'MultipleNegativesRankingLoss', 'matryoshka_dims': [768, 256], 'matryoshka_weights': [1, 1]}
```
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 719 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss` with parameters:
```
{'loss': 'CosineSimilarityLoss', 'matryoshka_dims': [768, 256], 'matryoshka_weights': [1, 1]}
```
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 360,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
TencentARC/MetaMath-Mistral-Pro | TencentARC | 2024-02-27T02:48:42Z | 13 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:meta-math/MetaMathQA",
"arxiv:2401.02415",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-26T13:39:38Z | ---
license: apache-2.0
datasets:
- meta-math/MetaMathQA
language:
- en
metrics:
- accuracy
---
See our paper at https://arxiv.org/abs/2401.02415.
View the project page: https://github.com/TencentARC/LLaMA-Pro
## Model Details
MetaMath-Mistral-Pro is fully fine-tuned on the MetaMathQA dataset and based on the powerful Mistral-Pro model.
## Model Usage
The model is trained to use the following format (note the newlines):
```
<|user|>
Your message here!
<|assistant|>
```
For best results, format all inputs in this manner. **Make sure to include a newline after `<|assistant|>`; this can affect generation quality quite a bit.**
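As an unofficial, minimal usage sketch, the template above can be applied with the 🤗 Transformers text-generation API; the sample question and decoding settings below are illustrative assumptions, not recommendations from the model authors:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TencentARC/MetaMath-Mistral-Pro"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build the prompt exactly as described above, keeping the newline after <|assistant|>.
question = "A bookshelf has 3 shelves with 12 books on each shelf. How many books are there in total?"
prompt = f"<|user|>\n{question}\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```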
## Experiments
| Model | GSM8k Pass@1 | MATH Pass@1 |
|---------------------|--------------|-------------|
| MPT-7B | 6.8 | 3.0 |
| Falcon-7B | 6.8 | 2.3 |
| LLaMA-1-7B | 11.0 | 2.9 |
| LLaMA-2-7B | 14.6 | 2.5 |
| MPT-30B | 15.2 | 3.1 |
| LLaMA-1-13B | 17.8 | 3.9 |
| GPT-Neo-2.7B | 19.5 | -- |
| Falcon-40B | 19.6 | 2.5 |
| Baichuan-chat-13B | 23.9 | -- |
| Vicuna-v1.3-13B | 27.6 | -- |
| LLaMA-2-13B | 28.7 | 3.9 |
| InternLM-7B | 31.2 | -- |
| ChatGLM-2-6B | 32.4 | -- |
| GPT-J-6B | 34.9 | -- |
| LLaMA-1-33B | 35.6 | 3.9 |
| LLaMA-2-34B | 42.2 | 6.24 |
| RFT-7B | 50.3 | -- |
| LLaMA-1-65B | 50.9 | 10.6 |
| Qwen-7B | 51.6 | -- |
| WizardMath-7B | 54.9 | 10.7 |
| LLaMA-2-70B | 56.8 | 13.5 |
| WizardMath-13B | 63.9 | 14.0 |
| MAmmoTH-7B (COT) | 50.5 | 10.4 |
| MAmmoTH-7B (POT+COT)| 53.6 | 31.5 |
| Arithmo-Mistral-7B | 74.7 | 25.3 |
| MetaMath-7B | 66.5 | 19.8 |
| MetaMath-13B | 72.3 | 22.4 |
| MetaMath-Mistral-7B | 77.7 | 28.2 |
| MetaMath-Llemma-7B | 69.2 | 30.0 |
| 🔥 **MetaMath-Mistral-Pro** | **78.4** | **30.3** |
## Citation
```bibtex
@article{wu2024llama,
title={Llama pro: Progressive llama with block expansion},
author={Wu, Chengyue and Gan, Yukang and Ge, Yixiao and Lu, Zeyu and Wang, Jiahao and Feng, Ye and Luo, Ping and Shan, Ying},
journal={arXiv preprint arXiv:2401.02415},
year={2024}
}
``` |
liminerity/mm3-slerp-3b | liminerity | 2024-02-27T02:44:52Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/mm1-slerp-3b-fine-tuned",
"liminerity/mm2-slerp-3b-fine-tuned",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T02:43:16Z | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/mm1-slerp-3b-fine-tuned
- liminerity/mm2-slerp-3b-fine-tuned
---
# mm3-slerp-3b
mm3-slerp-3b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/mm1-slerp-3b-fine-tuned](https://huggingface.co/liminerity/mm1-slerp-3b-fine-tuned)
* [liminerity/mm2-slerp-3b-fine-tuned](https://huggingface.co/liminerity/mm2-slerp-3b-fine-tuned)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: liminerity/mm1-slerp-3b-fine-tuned
layer_range: [0, 40]
- model: liminerity/mm2-slerp-3b-fine-tuned
layer_range: [0, 40]
merge_method: slerp
base_model: liminerity/mm1-slerp-3b-fine-tuned
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
``` |
Prasadrao/twitter-roberta-large-go-emotions | Prasadrao | 2024-02-27T02:38:12Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-large-2022-154m",
"base_model:finetune:cardiffnlp/twitter-roberta-large-2022-154m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-26T14:53:45Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
base_model: cardiffnlp/twitter-roberta-large-2022-154m
model-index:
- name: twitter-roberta-large-go-emotions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-roberta-large-go-emotions
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-large-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0816
- Accuracy: 0.4644
- Precision: 0.5709
- Recall: 0.5184
- F1: 0.5123
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 340 | 0.0889 | 0.4342 | 0.4653 | 0.4303 | 0.4243 |
| 0.1082 | 2.0 | 680 | 0.0819 | 0.4521 | 0.5253 | 0.4991 | 0.4856 |
| 0.1082 | 3.0 | 1020 | 0.0816 | 0.4644 | 0.5709 | 0.5184 | 0.5123 |
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.1
|
anhtranhong/fingpt-mt_llama2-7b_lora_with_fiqa-qa-v1.1 | anhtranhong | 2024-02-27T02:36:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-27T02:36:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TencentARC/Mistral_Pro_8B_v0.1 | TencentARC | 2024-02-27T02:31:53Z | 349 | 66 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:EleutherAI/proof-pile-2",
"dataset:bigcode/the-stack-dedup",
"dataset:math-ai/AutoMathText",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-22T14:47:03Z | ---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
- EleutherAI/proof-pile-2
- bigcode/the-stack-dedup
- math-ai/AutoMathText
language:
- en
metrics:
- accuracy
- code_eval
---
# Mistral-Pro-8B Model Card
## Model Description
Mistral-Pro is a progressive version of the original [Mistral](https://huggingface.co/mistralai/Mistral-7B-v0.1) model, enhanced by the addition of Transformer blocks. It specializes in integrating both general language understanding and domain-specific knowledge, particularly in programming and mathematics.
## Development and Training
Developed by Tencent's ARC Lab, Mistral-Pro is an 8-billion-parameter model. It's an expansion of Mistral-7B, further trained on code and math corpora.
## Intended Use
This model is designed for a wide range of NLP tasks, with a focus on programming, mathematics, and general language tasks. It suits scenarios requiring integration of natural and programming languages.
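As a rough, unofficial sketch of the usage described above, the checkpoint can be loaded like any other causal language model in 🤗 Transformers; the prompt and decoding settings are illustrative assumptions, since the card does not prescribe a chat template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TencentARC/Mistral_Pro_8B_v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Plain completion-style prompting, mixing natural language and code.
prompt = "def fibonacci(n):\n    \"\"\"Return the n-th Fibonacci number.\"\"\"\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```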
## Performance
Mistral_Pro_8B_v0.1 showcases superior performance on a range of benchmarks. It enhances the code and math performance of Mistral. Furthermore, it matches the performance of the recently dominant model, [Gemma](https://huggingface.co/google/gemma-7b).
### Overall Performance on Languages, math and code tasks
| Model | ARC | Hellaswag | MMLU | TruthfulQA | Winogrande | GSM8K | HumanEval |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| Gemma-7B | 61.9 | 82.2 | 64.6 | 44.8 | 79.0 | 50.9 | 32.3 |
| Mistral-7B | 60.8 | 83.3 | 62.7 | 42.6 | 78.0 | 39.2 | 28.7 |
| Mistral_Pro_8B_v0.1 | 63.2 | 82.6 | 60.6 | 48.3 | 78.9 | 50.6 | 32.9 |
## Limitations
While Mistral-Pro addresses some limitations of previous models in the series, it may still encounter challenges specific to highly specialized domains or tasks.
## Ethical Considerations
Users should be aware of potential biases in the model and use it responsibly, considering its impact on various applications. |
Microbee/Ansost-Disease | Microbee | 2024-02-27T02:31:20Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:autotrain-lgrcd-2wctt/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-26T07:42:01Z |
---
tags:
- autotrain
- text-classification
widget:
- text: "Presenile dementia"
- text: "physiopathological"
datasets:
- autotrain-lgrcd-2wctt/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
- loss: 0.422727108001709
- f1: 0.9172185430463576
- precision: 0.9121844127332601
- recall: 0.9223085460599334
- auc: 0.9308079638847088
- accuracy: 0.8777506112469438
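The widget examples above can be reproduced with a minimal, hypothetical inference sketch using the 🤗 Transformers pipeline API (the label names returned depend on the model's configuration):
```python
from transformers import pipeline

# Load the fine-tuned classifier directly from the Hub and score the widget inputs.
classifier = pipeline("text-classification", model="Microbee/Ansost-Disease")
print(classifier(["Presenile dementia", "physiopathological"]))
```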
|